
TidesDB C# API Reference



Getting Started

Prerequisites

You must have the TidesDB shared C library installed on your system. See the TidesDB C library documentation for installation instructions.

Installation

git clone https://github.com/tidesdb/tidesdb-cs.git
cd tidesdb-cs
dotnet build

Custom Installation Paths

If you installed TidesDB to a non-standard location, you can specify custom paths using environment variables:

# Linux
export LD_LIBRARY_PATH="/custom/path/lib:$LD_LIBRARY_PATH"
# macOS
export DYLD_LIBRARY_PATH="/custom/path/lib:$DYLD_LIBRARY_PATH"
# Windows (add to PATH)
set PATH=C:\custom\path\bin;%PATH%

Custom prefix installation

# Install TidesDB to custom location
cd tidesdb
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/opt/tidesdb
cmake --build build
sudo cmake --install build
# Configure environment to use custom location
export LD_LIBRARY_PATH="/opt/tidesdb/lib:$LD_LIBRARY_PATH" # Linux
# or
export DYLD_LIBRARY_PATH="/opt/tidesdb/lib:$DYLD_LIBRARY_PATH" # macOS

Usage

Opening and Closing a Database

using TidesDB;

var config = new Config
{
    DbPath = "./mydb",
    NumFlushThreads = 2,
    NumCompactionThreads = 2,
    LogLevel = LogLevel.Info,
    BlockCacheSize = 64 * 1024 * 1024,
    MaxOpenSstables = 256,
    MaxMemoryUsage = 0, // 0 = auto (50% of system RAM)
    LogToFile = false,
    LogTruncationAt = 0
};

using var db = TidesDb.Open(config);
Console.WriteLine("Database opened successfully");

Creating and Dropping Column Families

Column families are isolated key-value stores with independent configuration.

using TidesDB;

// Create with the default configuration
db.CreateColumnFamily("my_cf");

// Or create with a custom configuration
db.CreateColumnFamily("my_cf", new ColumnFamilyConfig
{
    WriteBufferSize = 128 * 1024 * 1024,
    LevelSizeRatio = 10,
    MinLevels = 5,
    CompressionAlgorithm = CompressionAlgorithm.Lz4,
    EnableBloomFilter = true,
    BloomFpr = 0.01,
    EnableBlockIndexes = true,
    SyncMode = SyncMode.Interval,
    SyncIntervalUs = 128000,
    DefaultIsolationLevel = IsolationLevel.ReadCommitted,
    UseBtree = false, // use B+tree format for klog (default: false = block-based)
});

db.DropColumnFamily("my_cf");

Cloning a Column Family

Create a complete copy of an existing column family with a new name. The clone contains all the data from the source at the time of cloning.

using TidesDB;
db.CreateColumnFamily("source_cf");
// Clone the column family
db.CloneColumnFamily("source_cf", "cloned_cf");
var original = db.GetColumnFamily("source_cf");
var clone = db.GetColumnFamily("cloned_cf");

Use cases

  • Testing · Create a copy of production data for testing without affecting the original
  • Branching · Create a snapshot of data before making experimental changes
  • Migration · Clone data before schema or configuration changes

Renaming a Column Family

Atomically rename a column family and its underlying directory.

using TidesDB;
db.CreateColumnFamily("old_name");
db.RenameColumnFamily("old_name", "new_name");
var cf = db.GetColumnFamily("new_name");

CRUD Operations

All operations in TidesDB are performed through transactions for ACID guarantees.

Writing Data

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
txn.Put(cf, Encoding.UTF8.GetBytes("key"), Encoding.UTF8.GetBytes("value"), -1);
txn.Commit();

Writing with TTL

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
var ttl = DateTimeOffset.UtcNow.ToUnixTimeSeconds() + 10;
txn.Put(cf, Encoding.UTF8.GetBytes("temp_key"), Encoding.UTF8.GetBytes("temp_value"), ttl);
txn.Commit();

TTL Examples

// No expiry
long ttlNone = -1;
// Expires in 5 minutes
long ttlFiveMinutes = DateTimeOffset.UtcNow.ToUnixTimeSeconds() + 5 * 60;
// Expires in 1 hour
long ttlOneHour = DateTimeOffset.UtcNow.ToUnixTimeSeconds() + 60 * 60;
// Expires at a specific date and time (UTC)
long ttlAbsolute = new DateTimeOffset(2026, 12, 31, 23, 59, 59, TimeSpan.Zero).ToUnixTimeSeconds();

Reading Data

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
var value = txn.Get(cf, Encoding.UTF8.GetBytes("key"));
if (value != null)
{
    Console.WriteLine($"Value: {Encoding.UTF8.GetString(value)}");
}

Deleting Data

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
txn.Delete(cf, Encoding.UTF8.GetBytes("key"));
txn.Commit();

Multi-Operation Transactions

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
try
{
    txn.Put(cf, Encoding.UTF8.GetBytes("key1"), Encoding.UTF8.GetBytes("value1"), -1);
    txn.Put(cf, Encoding.UTF8.GetBytes("key2"), Encoding.UTF8.GetBytes("value2"), -1);
    txn.Delete(cf, Encoding.UTF8.GetBytes("old_key"));
    txn.Commit();
}
catch
{
    txn.Rollback();
    throw;
}

Iterating Over Data

Iterators provide efficient bidirectional traversal over key-value pairs.

Forward Iteration

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
using var iter = txn.NewIterator(cf);
iter.SeekToFirst();
while (iter.Valid())
{
    var key = iter.Key();
    var value = iter.Value();
    Console.WriteLine($"Key: {Encoding.UTF8.GetString(key)}, Value: {Encoding.UTF8.GetString(value)}");
    iter.Next();
}

Backward Iteration

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
using var iter = txn.NewIterator(cf);
iter.SeekToLast();
while (iter.Valid())
{
    var key = iter.Key();
    var value = iter.Value();
    Console.WriteLine($"Key: {Encoding.UTF8.GetString(key)}, Value: {Encoding.UTF8.GetString(value)}");
    iter.Prev();
}

Seek to Specific Key

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
using var iter = txn.NewIterator(cf);
// Seek to first key >= target
iter.Seek(Encoding.UTF8.GetBytes("user:1000"));
if (iter.Valid())
{
    Console.WriteLine($"Found: {Encoding.UTF8.GetString(iter.Key())}");
}

Seek for Previous

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
using var iter = txn.NewIterator(cf);
// Seek to last key <= target
iter.SeekForPrev(Encoding.UTF8.GetBytes("user:2000"));
while (iter.Valid())
{
    Console.WriteLine($"Key: {Encoding.UTF8.GetString(iter.Key())}");
    iter.Prev();
}

Prefix Scanning

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
using var iter = txn.NewIterator(cf);
var prefix = "user:";
iter.Seek(Encoding.UTF8.GetBytes(prefix));
while (iter.Valid())
{
    var key = Encoding.UTF8.GetString(iter.Key());
    if (!key.StartsWith(prefix)) break;
    Console.WriteLine($"Found: {key}");
    iter.Next();
}

Getting Column Family Statistics

Retrieve detailed statistics about a column family.

var cf = db.GetColumnFamily("my_cf")!;
var stats = cf.GetStats();
Console.WriteLine($"Number of Levels: {stats.NumLevels}");
Console.WriteLine($"Memtable Size: {stats.MemtableSize} bytes");
Console.WriteLine($"Total Keys: {stats.TotalKeys}");
Console.WriteLine($"Total Data Size: {stats.TotalDataSize} bytes");
Console.WriteLine($"Avg Key Size: {stats.AvgKeySize:F1} bytes");
Console.WriteLine($"Avg Value Size: {stats.AvgValueSize:F1} bytes");
Console.WriteLine($"Read Amplification: {stats.ReadAmp:F2}");
Console.WriteLine($"Cache Hit Rate: {stats.HitRate * 100:F1}%");
for (int i = 0; i < stats.NumLevels; i++)
{
    Console.WriteLine($"Level {i + 1}: {stats.LevelNumSstables[i]} SSTables, {stats.LevelSizes[i]} bytes, {stats.LevelKeyCounts[i]} keys");
}
if (stats.UseBtree)
{
    Console.WriteLine($"B+tree Total Nodes: {stats.BtreeTotalNodes}");
    Console.WriteLine($"B+tree Max Height: {stats.BtreeMaxHeight}");
    Console.WriteLine($"B+tree Avg Height: {stats.BtreeAvgHeight:F2}");
}

Listing Column Families

var cfList = db.ListColumnFamilies();
Console.WriteLine("Available column families:");
foreach (var name in cfList)
{
    Console.WriteLine($" - {name}");
}

Updating Column Family Configuration

Update runtime-safe configuration settings for a column family. Changes apply to new operations only.

var cf = db.GetColumnFamily("my_cf")!;
cf.UpdateRuntimeConfig(new ColumnFamilyConfig
{
    WriteBufferSize = 256 * 1024 * 1024,
    SkipListMaxLevel = 16,
    SkipListProbability = 0.25f,
    BloomFpr = 0.001,
    IndexSampleRatio = 8,
});

Without persisting to disk

cf.UpdateRuntimeConfig(new ColumnFamilyConfig
{
    WriteBufferSize = 256 * 1024 * 1024,
}, persistToDisk: false);

Updatable settings

  • WriteBufferSize · Memtable flush threshold
  • SkipListMaxLevel · Skip list level for new memtables
  • SkipListProbability · Skip list probability for new memtables
  • BloomFpr · False positive rate for new SSTables
  • EnableBloomFilter · Enable/disable bloom filters for new SSTables
  • EnableBlockIndexes · Enable/disable block indexes for new SSTables
  • BlockIndexPrefixLen · Block index prefix length for new SSTables
  • IndexSampleRatio · Index sampling ratio for new SSTables
  • CompressionAlgorithm · Compression for new SSTables (existing SSTables retain their original compression)
  • KlogValueThreshold · Value log threshold for new writes
  • SyncMode · Durability mode. Also updates the active WAL’s sync mode immediately
  • SyncIntervalUs · Sync interval in microseconds (only used when SyncMode is Interval)
  • LevelSizeRatio · LSM level sizing
  • MinLevels · Minimum LSM levels
  • DividingLevelOffset · Compaction dividing level offset
  • L1FileCountTrigger · L1 file count compaction trigger
  • L0QueueStallThreshold · Backpressure stall threshold
  • DefaultIsolationLevel · Default transaction isolation level
  • MinDiskSpace · Minimum disk space required

Non-updatable settings

  • ComparatorName · Cannot change sort order after creation
  • UseBtree · Cannot change klog format after creation

Compaction

Manual Compaction

var cf = db.GetColumnFamily("my_cf")!;
cf.Compact();

Manual Memtable Flush

var cf = db.GetColumnFamily("my_cf")!;
cf.FlushMemtable();

Sync Modes

Control the durability vs performance tradeoff.

using TidesDB;

// Maximum throughput, least durable
db.CreateColumnFamily("fast_cf", new ColumnFamilyConfig
{
    SyncMode = SyncMode.None,
});

// Balanced durability and throughput
db.CreateColumnFamily("balanced_cf", new ColumnFamilyConfig
{
    SyncMode = SyncMode.Interval,
    SyncIntervalUs = 128000, // sync every 128ms
});

// Maximum durability, slowest writes
db.CreateColumnFamily("durable_cf", new ColumnFamilyConfig
{
    SyncMode = SyncMode.Full,
});

Compression Algorithms

TidesDB supports multiple compression algorithms:

using TidesDB;

db.CreateColumnFamily("no_compress", new ColumnFamilyConfig
{
    CompressionAlgorithm = CompressionAlgorithm.None,
});
db.CreateColumnFamily("lz4_cf", new ColumnFamilyConfig
{
    CompressionAlgorithm = CompressionAlgorithm.Lz4,
});
db.CreateColumnFamily("lz4_fast_cf", new ColumnFamilyConfig
{
    CompressionAlgorithm = CompressionAlgorithm.Lz4Fast,
});
db.CreateColumnFamily("zstd_cf", new ColumnFamilyConfig
{
    CompressionAlgorithm = CompressionAlgorithm.Zstd,
});
db.CreateColumnFamily("snappy_cf", new ColumnFamilyConfig
{
    CompressionAlgorithm = CompressionAlgorithm.Snappy, // not available on SunOS/Illumos/OmniOS
});

B+tree KLog Format (Optional)

Column families can optionally use a B+tree structure for the key log instead of the default block-based format. The B+tree klog format offers faster point lookups through O(log N) tree traversal.

using TidesDB;

db.CreateColumnFamily("btree_cf", new ColumnFamilyConfig
{
    UseBtree = true,
    CompressionAlgorithm = CompressionAlgorithm.Lz4,
    EnableBloomFilter = true,
});

When to use B+tree klog format

  • Read-heavy workloads with frequent point lookups
  • Workloads where read latency is more important than write throughput
  • Large SSTables where block scanning becomes expensive

Tradeoffs

  • Slightly higher write amplification during flush
  • Larger metadata overhead per node
  • Block-based format may be faster for sequential scans

Error Handling

using TidesDB;

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
try
{
    txn.Put(cf, Encoding.UTF8.GetBytes("key"), Encoding.UTF8.GetBytes("value"), -1);
    txn.Commit();
}
catch (TidesDBException ex)
{
    Console.WriteLine($"Error code: {ex.ErrorCode}");
    Console.WriteLine($"Error message: {ex.Message}");
    txn.Rollback();
}

Error Codes

  • TDB_SUCCESS (0) · Operation successful
  • TDB_ERR_MEMORY (-1) · Memory allocation failed
  • TDB_ERR_INVALID_ARGS (-2) · Invalid arguments
  • TDB_ERR_NOT_FOUND (-3) · Key not found
  • TDB_ERR_IO (-4) · I/O error
  • TDB_ERR_CORRUPTION (-5) · Data corruption
  • TDB_ERR_EXISTS (-6) · Resource already exists
  • TDB_ERR_CONFLICT (-7) · Transaction conflict
  • TDB_ERR_TOO_LARGE (-8) · Key or value too large
  • TDB_ERR_MEMORY_LIMIT (-9) · Memory limit exceeded
  • TDB_ERR_INVALID_DB (-10) · Invalid database handle
  • TDB_ERR_UNKNOWN (-11) · Unknown error
  • TDB_ERR_LOCKED (-12) · Database is locked

Complete Example

using System.Text;
using TidesDB;

var config = new Config
{
    DbPath = "./example_db",
    NumFlushThreads = 1,
    NumCompactionThreads = 1,
    LogLevel = LogLevel.Info,
    BlockCacheSize = 64 * 1024 * 1024,
    MaxOpenSstables = 256,
    MaxMemoryUsage = 0,
    LogToFile = false,
    LogTruncationAt = 0
};

using var db = TidesDb.Open(config);
try
{
    db.CreateColumnFamily("users", new ColumnFamilyConfig
    {
        WriteBufferSize = 64 * 1024 * 1024,
        CompressionAlgorithm = CompressionAlgorithm.Lz4,
        EnableBloomFilter = true,
        BloomFpr = 0.01,
        SyncMode = SyncMode.Interval,
        SyncIntervalUs = 128000,
    });
    var cf = db.GetColumnFamily("users")!;

    using (var txn = db.BeginTransaction())
    {
        txn.Put(cf, Encoding.UTF8.GetBytes("user:1"), Encoding.UTF8.GetBytes("Alice"), -1);
        txn.Put(cf, Encoding.UTF8.GetBytes("user:2"), Encoding.UTF8.GetBytes("Bob"), -1);
        // Temporary session with 30 second TTL
        var ttl = DateTimeOffset.UtcNow.ToUnixTimeSeconds() + 30;
        txn.Put(cf, Encoding.UTF8.GetBytes("session:abc"), Encoding.UTF8.GetBytes("temp_data"), ttl);
        txn.Commit();
    }

    using (var txn = db.BeginTransaction())
    {
        var value = txn.Get(cf, Encoding.UTF8.GetBytes("user:1"));
        Console.WriteLine($"user:1 = {Encoding.UTF8.GetString(value!)}");
        // Iterate over all entries
        using var iter = txn.NewIterator(cf);
        Console.WriteLine("\nAll entries:");
        iter.SeekToFirst();
        while (iter.Valid())
        {
            var key = Encoding.UTF8.GetString(iter.Key());
            var val = Encoding.UTF8.GetString(iter.Value());
            Console.WriteLine($" {key} = {val}");
            iter.Next();
        }
    }

    var stats = cf.GetStats();
    Console.WriteLine("\nColumn Family Statistics:");
    Console.WriteLine($" Number of Levels: {stats.NumLevels}");
    Console.WriteLine($" Memtable Size: {stats.MemtableSize} bytes");

    db.DropColumnFamily("users");
}
catch (TidesDBException ex)
{
    Console.WriteLine($"TidesDB error: {ex.Message} (code: {ex.ErrorCode})");
}

Isolation Levels

TidesDB supports five MVCC isolation levels:

using TidesDB;
using var txn = db.BeginTransaction(IsolationLevel.ReadCommitted);
txn.Commit();

Available Isolation Levels

  • IsolationLevel.ReadUncommitted · Sees all data including uncommitted changes
  • IsolationLevel.ReadCommitted · Sees only committed data (default)
  • IsolationLevel.RepeatableRead · Consistent snapshot, phantom reads possible
  • IsolationLevel.Snapshot · Write-write conflict detection
  • IsolationLevel.Serializable · Full read-write conflict detection (SSI)
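
Under Snapshot or Serializable isolation, a commit can fail when it conflicts with a concurrent writer. A common pattern is to retry. This is a minimal sketch, assuming ex.ErrorCode carries the numeric TDB_ERR_CONFLICT value (-7) from the error code list above:

```csharp
using System.Text;
using TidesDB;

const int TdbErrConflict = -7; // TDB_ERR_CONFLICT (assumption: ErrorCode exposes the C error codes)

var cf = db.GetColumnFamily("my_cf")!;
for (int attempt = 0; attempt < 3; attempt++)
{
    using var txn = db.BeginTransaction(IsolationLevel.Serializable);
    try
    {
        txn.Put(cf, Encoding.UTF8.GetBytes("counter"), Encoding.UTF8.GetBytes("1"), -1);
        txn.Commit();
        break; // committed successfully
    }
    catch (TidesDBException ex) when (ex.ErrorCode == TdbErrConflict)
    {
        txn.Rollback(); // lost a race with a concurrent writer; retry
    }
}
```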

Savepoints

Savepoints allow partial rollback within a transaction. You can create named savepoints, rollback to them, or release them without rolling back.

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
txn.Put(cf, Encoding.UTF8.GetBytes("key1"), Encoding.UTF8.GetBytes("value1"), -1);
txn.Savepoint("sp1");
txn.Put(cf, Encoding.UTF8.GetBytes("key2"), Encoding.UTF8.GetBytes("value2"), -1);
txn.RollbackToSavepoint("sp1");
txn.Commit();

Releasing a savepoint (frees resources without rolling back)

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
txn.Put(cf, Encoding.UTF8.GetBytes("key1"), Encoding.UTF8.GetBytes("value1"), -1);
txn.Savepoint("sp1");
txn.Put(cf, Encoding.UTF8.GetBytes("key2"), Encoding.UTF8.GetBytes("value2"), -1);
txn.ReleaseSavepoint("sp1");
txn.Commit();

Savepoint behavior

  • Savepoint(name) · Create a savepoint
  • RollbackToSavepoint(name) · Rollback to savepoint, discarding all operations after it
  • ReleaseSavepoint(name) · Release savepoint without rolling back
  • Multiple savepoints can be created with different names
  • Creating a savepoint with an existing name updates that savepoint
  • Savepoints are automatically freed when the transaction commits or rolls back

Transaction Reset

Reset a committed or aborted transaction for reuse with a new isolation level. This avoids the overhead of freeing and reallocating transaction resources in hot loops.

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
txn.Put(cf, Encoding.UTF8.GetBytes("key1"), Encoding.UTF8.GetBytes("value1"), -1);
txn.Commit();
txn.Reset(IsolationLevel.ReadCommitted);
txn.Put(cf, Encoding.UTF8.GetBytes("key2"), Encoding.UTF8.GetBytes("value2"), -1);
txn.Commit();

When to use

  • Batch processing · Reuse a single transaction across many commit cycles in a loop
  • Connection pooling · Reset a transaction for a new request without reallocation
  • High-throughput ingestion · Reduce allocation overhead in tight write loops
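
The batch-processing pattern can be sketched as a loop that commits in fixed-size chunks and reuses a single transaction via Reset (the batch size and key format here are illustrative):

```csharp
using System.Text;
using TidesDB;

var cf = db.GetColumnFamily("my_cf")!;
using var txn = db.BeginTransaction();
for (int i = 0; i < 2500; i++)
{
    txn.Put(cf, Encoding.UTF8.GetBytes($"key:{i}"), Encoding.UTF8.GetBytes($"value:{i}"), -1);
    if ((i + 1) % 1000 == 0)
    {
        txn.Commit();                            // commit this batch
        txn.Reset(IsolationLevel.ReadCommitted); // reuse the same transaction object
    }
}
txn.Commit(); // commit the final partial batch (500 entries here)
```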

Backup

Create an on-disk snapshot of an open database without blocking normal reads/writes.

using TidesDB;
using var db = TidesDb.Open(config);
db.Backup("./mydb_backup");

Behavior

  • Requires the directory to be non-existent or empty
  • Does not copy the LOCK file, so the backup can be opened normally
  • Database stays open and usable during backup
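
Because the LOCK file is excluded, the backup directory is itself a valid TidesDB database. A minimal sketch:

```csharp
using TidesDB;

using var db = TidesDb.Open(new Config { DbPath = "./mydb" });
db.Backup("./mydb_backup");

// The backup can be opened independently, even while the original stays open
using var backupDb = TidesDb.Open(new Config { DbPath = "./mydb_backup" });
```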

Checkpoint

Create a lightweight, near-instant snapshot of an open database using hard links instead of copying SSTable data.

using TidesDB;
using var db = TidesDb.Open(config);
db.Checkpoint("./mydb_checkpoint");

Behavior

  • Requires the directory to be non-existent or empty
  • Uses hard links for SSTable files (near-instant, O(1) per file)
  • Falls back to file copy if hard linking fails (e.g., cross-filesystem)
  • Flushes memtables and halts compactions to ensure a consistent snapshot
  • Database stays open and usable during checkpoint

Checkpoint vs Backup

  • Speed · Backup copies every SSTable byte-by-byte; checkpoint is near-instant (hard links, O(1) per file)
  • Disk usage · Backup makes a full independent copy; checkpoint uses no extra disk until compaction removes old SSTables
  • Portability · Backup can be moved to another filesystem or machine; checkpoint is same-filesystem only (hard link requirement)
  • Use case · Backup suits archival, disaster recovery, and remote shipping; checkpoint suits fast local snapshots, point-in-time reads, and streaming backups

Notes

  • The checkpoint can be opened as a normal TidesDB database with TidesDb.Open
  • Hard-linked files share storage with the live database; deleting the original does not affect the checkpoint

Checking Flush/Compaction Status

Check if a column family currently has flush or compaction operations in progress.

var cf = db.GetColumnFamily("my_cf")!;
if (cf.IsFlushing())
{
    Console.WriteLine("Flush in progress");
}
if (cf.IsCompacting())
{
    Console.WriteLine("Compaction in progress");
}
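
These checks can be combined into a simple polling wait for background work to settle, e.g. before shutting down or taking measurements (the polling interval is illustrative):

```csharp
using System.Threading;
using TidesDB;

var cf = db.GetColumnFamily("my_cf")!;
cf.FlushMemtable();

// Spin until no flush or compaction is running on this column family
while (cf.IsFlushing() || cf.IsCompacting())
{
    Thread.Sleep(50); // illustrative polling interval
}
Console.WriteLine("Background work settled");
```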

Cache Statistics

var cacheStats = db.GetCacheStats();
Console.WriteLine($"Cache enabled: {cacheStats.Enabled}");
Console.WriteLine($"Total entries: {cacheStats.TotalEntries}");
Console.WriteLine($"Total bytes: {cacheStats.TotalBytes / (1024.0 * 1024.0):F2} MB");
Console.WriteLine($"Hits: {cacheStats.Hits}");
Console.WriteLine($"Misses: {cacheStats.Misses}");
Console.WriteLine($"Hit rate: {cacheStats.HitRate * 100:F1}%");
Console.WriteLine($"Partitions: {cacheStats.NumPartitions}");

Range Cost Estimation

RangeCost estimates the computational cost of iterating between two keys in a column family. The returned value is an opaque double — meaningful only for comparison with other values from the same method. It uses only in-memory metadata and performs no disk I/O.

var cf = db.GetColumnFamily("my_cf")!;
var costA = cf.RangeCost(
    Encoding.UTF8.GetBytes("user:0000"),
    Encoding.UTF8.GetBytes("user:0999"));
var costB = cf.RangeCost(
    Encoding.UTF8.GetBytes("user:1000"),
    Encoding.UTF8.GetBytes("user:1099"));
if (costA < costB)
{
    Console.WriteLine("Range A is cheaper to iterate");
}

How it works

The function walks all SSTable levels and uses in-memory metadata to estimate how many blocks and entries fall within the given key range:

  • With block indexes enabled · Uses O(log B) binary search per overlapping SSTable to find the block slots containing each key bound
  • Without block indexes · Falls back to byte-level key interpolation against the SSTable’s min/max key range
  • B+tree SSTables (UseBtree=true) · Uses key interpolation against tree node counts, plus tree height as a seek cost
  • Compression · Compressed SSTables receive a 1.5× weight multiplier to account for decompression overhead
  • Merge overhead · Each overlapping SSTable adds a small fixed cost for merge-heap operations
  • Memtable · The active memtable’s entry count contributes a small in-memory cost

Key order does not matter: the method normalizes the range, so RangeCost(keyA, keyB) returns the same result as RangeCost(keyB, keyA).

Use cases

  • Query planning · Compare candidate key ranges to find the cheapest one to scan
  • Load balancing · Distribute range scan work across threads by estimating per-range cost
  • Adaptive prefetching · Decide how aggressively to prefetch based on range size
  • Monitoring · Track how data distribution changes across key ranges over time
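
The load-balancing use case can be sketched by ranking candidate ranges by estimated cost before assigning them to workers (the shard boundaries below are illustrative):

```csharp
using System.Linq;
using System.Text;
using TidesDB;

var cf = db.GetColumnFamily("my_cf")!;

// Hypothetical shard boundaries over a "user:" keyspace
var shards = new[] { ("user:0", "user:4"), ("user:4", "user:8"), ("user:8", "user:z") };

// Rank shards cheapest-first; uses only in-memory metadata, no disk I/O
var ranked = shards
    .Select(s => (Range: s, Cost: cf.RangeCost(
        Encoding.UTF8.GetBytes(s.Item1),
        Encoding.UTF8.GetBytes(s.Item2))))
    .OrderBy(r => r.Cost)
    .ToList();

foreach (var r in ranked)
{
    Console.WriteLine($"[{r.Range.Item1}, {r.Range.Item2}) estimated cost {r.Cost:F1}");
}
```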

Custom Comparators

TidesDB provides six built-in comparators that are automatically registered on database open:

  • "memcmp" (default) · Binary byte-by-byte comparison
  • "lexicographic" · Null-terminated string comparison
  • "uint64" · Unsigned 64-bit integer comparison
  • "int64" · Signed 64-bit integer comparison
  • "reverse" · Reverse binary comparison
  • "case_insensitive" · Case-insensitive ASCII comparison
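
A built-in comparator is selected per column family through ComparatorName. A sketch using the "uint64" comparator (whether it expects native-endian or big-endian 8-byte keys is an assumption here; check the C library documentation):

```csharp
using System;
using System.Text;
using TidesDB;

db.CreateColumnFamily("counters", new ColumnFamilyConfig
{
    ComparatorName = "uint64", // keys compared as unsigned 64-bit integers
});
var cf = db.GetColumnFamily("counters")!;

// Encode keys as 8-byte values (endianness assumption noted above)
static byte[] Key(ulong n) => BitConverter.GetBytes(n);

using var txn = db.BeginTransaction();
txn.Put(cf, Key(10), Encoding.UTF8.GetBytes("ten"), -1);
txn.Put(cf, Key(2), Encoding.UTF8.GetBytes("two"), -1);
txn.Commit(); // iteration over "counters" follows numeric key order, not byte order
```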

Registering a Comparator

db.RegisterComparator("my_comparator");

Getting a Registered Comparator

Check if a comparator is registered by name:

if (db.GetComparator("memcmp"))
{
Console.WriteLine("memcmp comparator is registered");
}
if (!db.GetComparator("nonexistent"))
{
Console.WriteLine("comparator not registered");
}

Use cases

  • Validation · Check if a comparator is registered before creating a column family
  • Debugging · Verify comparator registration during development
  • Dynamic configuration · Query available comparators at runtime

Commit Hook (Change Data Capture)

SetCommitHook registers a callback that fires synchronously after every transaction commit on a column family. The hook receives the full batch of committed operations atomically, enabling real-time change data capture without WAL parsing or external log consumers.

var cf = db.GetColumnFamily("my_cf")!;
cf.SetCommitHook((ops, commitSeq) =>
{
    foreach (var op in ops)
    {
        if (op.IsDelete)
        {
            Console.WriteLine($"[{commitSeq}] DELETE {Encoding.UTF8.GetString(op.Key)}");
        }
        else
        {
            Console.WriteLine($"[{commitSeq}] PUT {Encoding.UTF8.GetString(op.Key)} = {Encoding.UTF8.GetString(op.Value!)}");
        }
    }
});

// Normal writes now trigger the hook automatically
using (var txn = db.BeginTransaction())
{
    txn.Put(cf, Encoding.UTF8.GetBytes("key1"), Encoding.UTF8.GetBytes("value1"), -1);
    txn.Commit(); // hook fires here
}

// Detach hook
cf.ClearCommitHook();

Operation fields

  • Key · byte[] · The key data
  • Value · byte[]? · The value data (null for deletes)
  • Ttl · long · Time-to-live (0 = no expiry)
  • IsDelete · bool · True if this is a delete operation

Behavior

  • The hook fires after WAL write, memtable apply, and commit status marking are complete — the data is fully durable before the callback runs
  • Hook exceptions are caught internally and do not affect the commit result
  • Each column family has its own independent hook; a multi-CF transaction fires the hook once per CF with only that CF’s operations
  • commitSeq is monotonically increasing across commits and can be used as a replication cursor
  • Data in CommitOp is copied from native memory — safe to retain after the callback returns
  • The hook executes synchronously on the committing thread; keep the callback fast to avoid stalling writers
  • Calling ClearCommitHook() disables the hook immediately with no restart required

Use cases

  • Replication · Ship committed batches to replicas in commit order
  • Event streaming · Publish mutations to Kafka, NATS, or any message broker
  • Secondary indexing · Maintain a reverse index or materialized view
  • Audit logging · Record every mutation with key, value, TTL, and sequence number
  • Debugging · Attach a temporary hook in production to inspect live writes
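
The event-streaming use case can be sketched by keeping the hook itself minimal: enqueue the committed batch and let a background consumer do the slow shipping (the queue, the consumer, and the CommitOp[] parameter type are assumptions here, not part of the API):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using TidesDB;

// Keep the hook fast: it runs synchronously on the committing thread
var queue = new BlockingCollection<(ulong Seq, CommitOp[] Ops)>();

var cf = db.GetColumnFamily("my_cf")!;
cf.SetCommitHook((ops, commitSeq) =>
{
    // CommitOp data is copied from native memory, so retaining it here is safe
    queue.Add((commitSeq, ops));
});

var consumer = Task.Run(() =>
{
    foreach (var (seq, ops) in queue.GetConsumingEnumerable())
    {
        // Ship `ops` to Kafka, NATS, etc., using `seq` as the replication cursor
    }
});
```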

Testing

cd tidesdb-cs
dotnet build
dotnet test

C# Types

The package exports all necessary types for full C# support:

using TidesDB;
// Main classes
// -- TidesDb
// -- ColumnFamily
// -- Transaction
// -- Iterator
// -- TidesDBException
// Configuration classes
// -- Config
// -- ColumnFamilyConfig
// -- Stats
// -- CacheStats
// Change data capture
// -- CommitOp
// -- CommitHookHandler
// Methods
// -- TidesDb.GetComparator(string name)
// -- ColumnFamily.UpdateRuntimeConfig(ColumnFamilyConfig config, bool persistToDisk)
// Enums
// -- CompressionAlgorithm
// -- SyncMode
// -- LogLevel
// -- IsolationLevel

Configuration Reference

Database Configuration (Config)

  • DbPath · string · required · Path to the database directory
  • NumFlushThreads · int · default 2 · Number of flush threads
  • NumCompactionThreads · int · default 2 · Number of compaction threads
  • LogLevel · LogLevel · default Info · Logging level
  • BlockCacheSize · ulong · default 64MB · Block cache size in bytes
  • MaxOpenSstables · ulong · default 256 · Maximum number of open SSTables
  • MaxMemoryUsage · ulong · default 0 · Global memory limit in bytes (0 = auto, 50% of system RAM; minimum: 5% of system RAM)
  • LogToFile · bool · default false · Write debug logging to a file
  • LogTruncationAt · ulong · default 0 · Log file truncation threshold (0 = no truncation)

Column Family Configuration (ColumnFamilyConfig)

  • WriteBufferSize · ulong · default 64MB · Memtable flush threshold
  • LevelSizeRatio · ulong · default 10 · Level size multiplier
  • MinLevels · int · default 5 · Minimum LSM levels
  • DividingLevelOffset · int · default 2 · Compaction dividing level offset
  • KlogValueThreshold · ulong · default 512 · Values larger than this go to the vlog
  • CompressionAlgorithm · CompressionAlgorithm · default Lz4 · Compression algorithm
  • EnableBloomFilter · bool · default true · Enable bloom filters
  • BloomFpr · double · default 0.01 · Bloom filter false positive rate
  • EnableBlockIndexes · bool · default true · Enable block indexes
  • IndexSampleRatio · int · default 1 · Index sample ratio
  • BlockIndexPrefixLen · int · default 16 · Block index prefix length
  • SyncMode · SyncMode · default Full · Sync mode for durability
  • SyncIntervalUs · ulong · default 1000000 · Sync interval in microseconds
  • ComparatorName · string · default "" · Comparator name (empty for default)
  • SkipListMaxLevel · int · default 12 · Skip list max level
  • SkipListProbability · float · default 0.25 · Skip list probability
  • DefaultIsolationLevel · IsolationLevel · default ReadCommitted · Default transaction isolation
  • MinDiskSpace · ulong · default 100MB · Minimum disk space required
  • L1FileCountTrigger · int · default 4 · L1 file count trigger for compaction
  • L0QueueStallThreshold · int · default 20 · L0 queue stall threshold
  • UseBtree · bool · default false · Use B+tree format for klog

Column Family Statistics (Stats)

  • NumLevels · int · Number of LSM levels
  • MemtableSize · ulong · Current memtable size in bytes
  • LevelSizes · ulong[] · Array of per-level total sizes
  • LevelNumSstables · int[] · Array of per-level SSTable counts
  • LevelKeyCounts · ulong[] · Array of per-level key counts
  • Config · ColumnFamilyConfig? · Full column family configuration
  • TotalKeys · ulong · Total keys across memtable and all SSTables
  • TotalDataSize · ulong · Total data size (klog + vlog) in bytes
  • AvgKeySize · double · Estimated average key size in bytes
  • AvgValueSize · double · Estimated average value size in bytes
  • ReadAmp · double · Read amplification factor (point lookup cost)
  • HitRate · double · Block cache hit rate (0.0 to 1.0)
  • UseBtree · bool · Whether the column family uses the B+tree klog format
  • BtreeTotalNodes · ulong · Total B+tree nodes (only if UseBtree = true)
  • BtreeMaxHeight · uint · Maximum tree height (only if UseBtree = true)
  • BtreeAvgHeight · double · Average tree height (only if UseBtree = true)