TidesDB Go API Reference
Getting Started
Prerequisites
You must have the TidesDB shared C library installed on your system. You can find the installation instructions here.
Installation
```bash
go get github.com/tidesdb/tidesdb-go
```
Custom Installation Paths
If you installed TidesDB to a non-standard location, you can specify custom paths using CGO environment variables:
```bash
# Set custom include and library paths
export CGO_CFLAGS="-I/custom/path/include"
export CGO_LDFLAGS="-L/custom/path/lib -ltidesdb"

# Then install/build
go get github.com/tidesdb/tidesdb-go
```
Custom prefix installation
```bash
# Install TidesDB to custom location
cd tidesdb
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/opt/tidesdb
cmake --build build
sudo cmake --install build

# Configure Go to use custom location
export CGO_CFLAGS="-I/opt/tidesdb/include"
export CGO_LDFLAGS="-L/opt/tidesdb/lib -ltidesdb"
export LD_LIBRARY_PATH="/opt/tidesdb/lib:$LD_LIBRARY_PATH"     # Linux
# or
export DYLD_LIBRARY_PATH="/opt/tidesdb/lib:$DYLD_LIBRARY_PATH" # macOS

go get github.com/tidesdb/tidesdb-go
```
Usage
Opening and Closing a Database
```go
package main

import (
    "fmt"
    "log"

    tidesdb "github.com/tidesdb/tidesdb-go"
)

func main() {
    config := tidesdb.Config{
        DBPath:               "./mydb",
        NumFlushThreads:      2,
        NumCompactionThreads: 2,
        LogLevel:             tidesdb.LogInfo,
        BlockCacheSize:       64 * 1024 * 1024,
        MaxOpenSSTables:      256,
        LogToFile:            false,            // Write logs to file instead of stderr
        LogTruncationAt:      24 * 1024 * 1024, // Log file truncation size (24MB default)
    }

    db, err := tidesdb.Open(config)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    fmt.Println("Database opened successfully")
}
```
Backup
Create an on-disk snapshot of an open database without blocking normal reads/writes.
```go
err := db.Backup("./mydb_backup")
if err != nil {
    log.Fatal(err)
}
```
Behavior
- Requires the backup directory to be non-existent or empty
- Does not copy the LOCK file, so the backup can be opened normally (see the sketch below)
- Database stays open and usable during backup
- The backup represents the database state after the final flush/compaction drain
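Because the LOCK file is not copied, the backup directory can be opened like any ordinary database, for example to verify its contents. A minimal sketch, assuming the backup path from above and that a minimal Config is sufficient:

```go
// Open the backup as a regular database to verify its contents.
backupDB, err := tidesdb.Open(tidesdb.Config{
    DBPath:   "./mydb_backup",
    LogLevel: tidesdb.LogInfo,
})
if err != nil {
    log.Fatal(err)
}
defer backupDB.Close()
```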
Checkpoint
Create a lightweight, near-instant snapshot of an open database using hard links instead of copying SSTable data.
```go
err := db.Checkpoint("./mydb_checkpoint")
if err != nil {
    log.Fatal(err)
}
```
Behavior
- Requires the checkpoint directory to be non-existent or empty
- For each column family:
  - Flushes the active memtable so all data is in SSTables
  - Halts compactions to ensure a consistent view of live SSTable files
  - Hard links all SSTable files (`.klog` and `.vlog`) into the checkpoint directory
  - Copies small metadata files (manifest, config) into the checkpoint directory
  - Resumes compactions
- Falls back to file copy if hard linking fails (e.g., cross-filesystem)
- Database stays open and usable during checkpoint
- The checkpoint can be opened as a normal TidesDB database with `Open` (see the sketch below)
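For example, a checkpoint can serve point-in-time reads while the live database keeps running. A minimal sketch, assuming the checkpoint path from above and that a minimal Config suffices:

```go
// Open the checkpoint for point-in-time reads; the live DB stays untouched.
cpDB, err := tidesdb.Open(tidesdb.Config{
    DBPath:   "./mydb_checkpoint",
    LogLevel: tidesdb.LogInfo,
})
if err != nil {
    log.Fatal(err)
}
defer cpDB.Close()
```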
Checkpoint vs Backup
| | Backup | Checkpoint |
|---|---|---|
| Speed | Copies every SSTable byte-by-byte | Near-instant (hard links, O(1) per file) |
| Disk usage | Full independent copy | No extra disk until compaction removes old SSTables |
| Portability | Can be moved to another filesystem or machine | Same filesystem only (hard link requirement) |
| Use case | Archival, disaster recovery, remote shipping | Fast local snapshots, point-in-time reads, streaming backups |
Notes
- The checkpoint represents the database state at the point all memtables are flushed and compactions are halted
- Hard-linked files share storage with the live database. Deleting the original database does not affect the checkpoint (hard link semantics)
Creating and Dropping Column Families
Column families are isolated key-value stores with independent configuration.
```go
cfConfig := tidesdb.DefaultColumnFamilyConfig()
err := db.CreateColumnFamily("my_cf", cfConfig)
if err != nil {
    log.Fatal(err)
}
```
```go
// Create with custom configuration based on defaults
cfConfig := tidesdb.DefaultColumnFamilyConfig()

// You can modify the configuration as needed
cfConfig.WriteBufferSize = 128 * 1024 * 1024
cfConfig.LevelSizeRatio = 10
cfConfig.MinLevels = 5
cfConfig.CompressionAlgorithm = tidesdb.LZ4Compression
cfConfig.EnableBloomFilter = true
cfConfig.BloomFPR = 0.01
cfConfig.EnableBlockIndexes = true
cfConfig.SyncMode = tidesdb.SyncInterval
cfConfig.SyncIntervalUs = 128000
cfConfig.DefaultIsolationLevel = tidesdb.IsolationReadCommitted
cfConfig.DividingLevelOffset = 2          // Compaction dividing level offset
cfConfig.KlogValueThreshold = 512         // Values > 512 bytes go to vlog
cfConfig.BlockIndexPrefixLen = 16         // Block index prefix length
cfConfig.MinDiskSpace = 100 * 1024 * 1024 // Minimum disk space required (100MB)
cfConfig.L1FileCountTrigger = 4           // L1 file count trigger for compaction
cfConfig.L0QueueStallThreshold = 20       // L0 queue stall threshold
cfConfig.UseBtree = 0                     // Use B+tree format for klog (0 = block-based)

err = db.CreateColumnFamily("my_cf", cfConfig)
if err != nil {
    log.Fatal(err)
}
```
```go
err = db.DropColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}
```
Renaming a Column Family
Atomically rename a column family and its underlying directory. The operation waits for any in-progress flush or compaction to complete before renaming.
```go
err := db.RenameColumnFamily("old_name", "new_name")
if err != nil {
    log.Fatal(err)
}
```
Cloning a Column Family
Create a complete copy of an existing column family with a new name. The clone contains all the data from the source at the time of cloning.
```go
err := db.CloneColumnFamily("source_cf", "cloned_cf")
if err != nil {
    log.Fatal(err)
}

// Both column families now exist independently
sourceCF, _ := db.GetColumnFamily("source_cf")
clonedCF, _ := db.GetColumnFamily("cloned_cf")
```
Behavior
- Flushes the source column family’s memtable to ensure all data is on disk
- Waits for any in-progress flush or compaction to complete
- Copies all SSTable files to the new directory
- The clone is completely independent; modifications to one do not affect the other
Use cases
- Testing · Create a copy of production data for testing without affecting the original
- Branching · Create a snapshot of data before making experimental changes
- Migration · Clone data before schema or configuration changes
- Backup verification · Clone and verify data integrity without modifying the source
Return values
- `nil` · Clone completed successfully
- Error with `ErrNotFound` · Source column family doesn't exist
- Error with `ErrExists` · Destination column family already exists
- Error with `ErrInvalidArgs` · Invalid arguments (nil pointers or same source/destination name)
- Error with `ErrIO` · Failed to copy files or create directory
CRUD Operations
All operations in TidesDB are performed through transactions for ACID guarantees.
Writing Data
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

// Put a key-value pair (TTL -1 means no expiration)
err = txn.Put(cf, []byte("key"), []byte("value"), -1)
if err != nil {
    log.Fatal(err)
}

err = txn.Commit()
if err != nil {
    log.Fatal(err)
}
```
Writing with TTL
import "time"
cf, err := db.GetColumnFamily("my_cf")if err != nil { log.Fatal(err)}
txn, err := db.BeginTxn()if err != nil { log.Fatal(err)}defer txn.Free()
// Set expiration time (Unix timestamp)ttl := time.Now().Add(10 * time.Second).Unix()
err = txn.Put(cf, []byte("temp_key"), []byte("temp_value"), ttl)if err != nil { log.Fatal(err)}
err = txn.Commit()if err != nil { log.Fatal(err)}TTL Examples
```go
// No expiration
ttl := int64(-1)

// Expire in 5 minutes
ttl := time.Now().Add(5 * time.Minute).Unix()

// Expire in 1 hour
ttl := time.Now().Add(1 * time.Hour).Unix()

// Expire at a specific time
ttl := time.Date(2026, 12, 31, 23, 59, 59, 0, time.UTC).Unix()
```
Reading Data
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

value, err := txn.Get(cf, []byte("key"))
if err != nil {
    log.Fatal(err)
}

fmt.Printf("Value: %s\n", value)
```
Deleting Data
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

err = txn.Delete(cf, []byte("key"))
if err != nil {
    log.Fatal(err)
}

err = txn.Commit()
if err != nil {
    log.Fatal(err)
}
```
Multi-Operation Transactions
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

// Multiple operations in one transaction, across column families as well
err = txn.Put(cf, []byte("key1"), []byte("value1"), -1)
if err != nil {
    txn.Rollback()
    log.Fatal(err)
}

err = txn.Put(cf, []byte("key2"), []byte("value2"), -1)
if err != nil {
    txn.Rollback()
    log.Fatal(err)
}

err = txn.Delete(cf, []byte("old_key"))
if err != nil {
    txn.Rollback()
    log.Fatal(err)
}

// Commit atomically -- all or nothing
err = txn.Commit()
if err != nil {
    log.Fatal(err)
}
```
Iterating Over Data
Iterators provide efficient bidirectional traversal over key-value pairs.
Forward Iteration
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

iter, err := txn.NewIterator(cf)
if err != nil {
    log.Fatal(err)
}
defer iter.Free()

iter.SeekToFirst()

for iter.Valid() {
    key, err := iter.Key()
    if err != nil {
        log.Fatal(err)
    }

    value, err := iter.Value()
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Key: %s, Value: %s\n", key, value)

    iter.Next()
}
```
Backward Iteration
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

iter, err := txn.NewIterator(cf)
if err != nil {
    log.Fatal(err)
}
defer iter.Free()

iter.SeekToLast()

for iter.Valid() {
    key, err := iter.Key()
    if err != nil {
        log.Fatal(err)
    }

    value, err := iter.Value()
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Key: %s, Value: %s\n", key, value)

    iter.Prev()
}
```
Seek Operations
Seek to a specific key or key range without scanning from the beginning.
```go
iter, err := txn.NewIterator(cf)
if err != nil {
    log.Fatal(err)
}
defer iter.Free()

// Seek to first key >= target
err = iter.Seek([]byte("user:1000"))
if err != nil {
    log.Fatal(err)
}

if iter.Valid() {
    key, _ := iter.Key()
    fmt.Printf("Found: %s\n", key)
}

// Seek to last key <= target (for reverse iteration)
err = iter.SeekForPrev([]byte("user:2000"))
if err != nil {
    log.Fatal(err)
}

for iter.Valid() {
    key, _ := iter.Key()
    fmt.Printf("Reverse: %s\n", key)
    iter.Prev()
}
```
Prefix Seeking
```go
import "bytes"

iter, err := txn.NewIterator(cf)
if err != nil {
    log.Fatal(err)
}
defer iter.Free()

prefix := []byte("user:")
err = iter.Seek(prefix)
if err != nil {
    log.Fatal(err)
}

for iter.Valid() {
    key, _ := iter.Key()

    // Stop when keys no longer match prefix
    if !bytes.HasPrefix(key, prefix) {
        break
    }

    value, _ := iter.Value()
    fmt.Printf("%s = %s\n", key, value)

    iter.Next()
}
```
Getting Column Family Statistics
Retrieve detailed statistics about a column family.
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

stats, err := cf.GetStats()
if err != nil {
    log.Fatal(err)
}

fmt.Printf("Number of Levels: %d\n", stats.NumLevels)
fmt.Printf("Memtable Size: %d bytes\n", stats.MemtableSize)
fmt.Printf("Total Keys: %d\n", stats.TotalKeys)
fmt.Printf("Total Data Size: %d bytes\n", stats.TotalDataSize)
fmt.Printf("Avg Key Size: %.2f bytes\n", stats.AvgKeySize)
fmt.Printf("Avg Value Size: %.2f bytes\n", stats.AvgValueSize)
fmt.Printf("Read Amplification: %.2f\n", stats.ReadAmp)
fmt.Printf("Hit Rate: %.2f%%\n", stats.HitRate*100)

// B+tree statistics (only populated if UseBtree=1)
if stats.UseBtree {
    fmt.Printf("B+tree Total Nodes: %d\n", stats.BtreeTotalNodes)
    fmt.Printf("B+tree Max Height: %d\n", stats.BtreeMaxHeight)
    fmt.Printf("B+tree Avg Height: %.2f\n", stats.BtreeAvgHeight)
}

// Per-level statistics
for i := 0; i < stats.NumLevels; i++ {
    fmt.Printf("Level %d: %d SSTables, %d bytes, %d keys\n",
        i+1, stats.LevelNumSSTables[i], stats.LevelSizes[i], stats.LevelKeyCounts[i])
}

if stats.Config != nil {
    fmt.Printf("Write Buffer Size: %d\n", stats.Config.WriteBufferSize)
    fmt.Printf("Compression: %d\n", stats.Config.CompressionAlgorithm)
    fmt.Printf("Bloom Filter: %v\n", stats.Config.EnableBloomFilter)
    fmt.Printf("Sync Mode: %d\n", stats.Config.SyncMode)
    fmt.Printf("Use B+tree: %d\n", stats.Config.UseBtree)
}
```
Stats Fields
| Field | Type | Description |
|---|---|---|
| NumLevels | int | Number of LSM levels |
| MemtableSize | uint64 | Current memtable size in bytes |
| LevelSizes | []uint64 | Array of per-level total sizes |
| LevelNumSSTables | []int | Array of per-level SSTable counts |
| LevelKeyCounts | []uint64 | Array of per-level key counts |
| Config | *ColumnFamilyConfig | Full column family configuration |
| TotalKeys | uint64 | Total keys across memtable and all SSTables |
| TotalDataSize | uint64 | Total data size (klog + vlog) in bytes |
| AvgKeySize | float64 | Estimated average key size in bytes |
| AvgValueSize | float64 | Estimated average value size in bytes |
| ReadAmp | float64 | Read amplification factor |
| HitRate | float64 | Block cache hit rate (0.0 to 1.0) |
| UseBtree | bool | Whether column family uses B+tree klog format |
| BtreeTotalNodes | uint64 | Total B+tree nodes across all SSTables |
| BtreeMaxHeight | uint32 | Maximum tree height across all SSTables |
| BtreeAvgHeight | float64 | Average tree height across all SSTables |
Getting Block Cache Statistics
Get statistics for the global block cache (shared across all column families).
```go
cacheStats, err := db.GetCacheStats()
if err != nil {
    log.Fatal(err)
}

if cacheStats.Enabled {
    fmt.Printf("Cache enabled: yes\n")
    fmt.Printf("Total entries: %d\n", cacheStats.TotalEntries)
    fmt.Printf("Total bytes: %.2f MB\n", float64(cacheStats.TotalBytes)/(1024.0*1024.0))
    fmt.Printf("Hits: %d\n", cacheStats.Hits)
    fmt.Printf("Misses: %d\n", cacheStats.Misses)
    fmt.Printf("Hit rate: %.1f%%\n", cacheStats.HitRate*100.0)
    fmt.Printf("Partitions: %d\n", cacheStats.NumPartitions)
} else {
    fmt.Printf("Cache enabled: no (BlockCacheSize = 0)\n")
}
```
CacheStats Fields
| Field | Type | Description |
|---|---|---|
| Enabled | bool | Whether block cache is active |
| TotalEntries | uint64 | Number of cached blocks |
| TotalBytes | uint64 | Total memory used by cached blocks |
| Hits | uint64 | Number of cache hits |
| Misses | uint64 | Number of cache misses |
| HitRate | float64 | Hit rate as a decimal (0.0 to 1.0) |
| NumPartitions | uint64 | Number of cache partitions |
Listing Column Families
```go
cfList, err := db.ListColumnFamilies()
if err != nil {
    log.Fatal(err)
}

fmt.Println("Available column families:")
for _, name := range cfList {
    fmt.Printf("  - %s\n", name)
}
```
Compaction
Manual Compaction
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

// Manually trigger compaction (queues compaction from L1+)
err = cf.Compact()
if err != nil {
    log.Printf("Compaction note: %v", err)
}
```
Manual Memtable Flush
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

// Manually trigger a memtable flush (queues the memtable to be written as a sorted run to L1)
err = cf.FlushMemtable()
if err != nil {
    log.Printf("Flush note: %v", err)
}
```
Checking Flush/Compaction Status
Check if a column family currently has flush or compaction operations in progress.
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

// Check if flushing is in progress
if cf.IsFlushing() {
    fmt.Println("Flush in progress")
}

// Check if compaction is in progress
if cf.IsCompacting() {
    fmt.Println("Compaction in progress")
}
```
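These checks can be combined into a simple wait loop, for example to let background work drain before taking a checkpoint. A minimal sketch (the polling interval is arbitrary; requires `time` from the standard library):

```go
// Poll until no flush or compaction is running for this column family.
for cf.IsFlushing() || cf.IsCompacting() {
    time.Sleep(10 * time.Millisecond)
}
fmt.Println("Background work drained")
```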
Updating Runtime Configuration
Update runtime-safe configuration settings for a column family. Changes apply to new operations only.
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

newConfig := tidesdb.DefaultColumnFamilyConfig()
newConfig.WriteBufferSize = 256 * 1024 * 1024
newConfig.SkipListMaxLevel = 16
newConfig.BloomFPR = 0.001 // 0.1% false positive rate

err = cf.UpdateRuntimeConfig(newConfig, true)
if err != nil {
    log.Fatal(err)
}
```
Updatable settings (safe to change at runtime):
- `WriteBufferSize` · Memtable flush threshold
- `SkipListMaxLevel` · Skip list level for new memtables
- `SkipListProbability` · Skip list probability for new memtables
- `BloomFPR` · False positive rate for new SSTables
- `IndexSampleRatio` · Index sampling ratio for new SSTables
- `SyncMode` · Durability mode
- `SyncIntervalUs` · Sync interval in microseconds
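For instance, `SyncMode` and `SyncIntervalUs` can be relaxed during a bulk load and tightened again afterwards. A sketch mirroring the call shape above; note that the exact merge semantics of `UpdateRuntimeConfig` (and its second argument) are not shown here, so verify them before relying on this pattern:

```go
// Relax durability for a bulk load (config based on defaults).
bulkCfg := tidesdb.DefaultColumnFamilyConfig()
bulkCfg.SyncMode = tidesdb.SyncNone
if err := cf.UpdateRuntimeConfig(bulkCfg, true); err != nil {
    log.Fatal(err)
}

// ... load data ...

// Restore periodic syncing once the load completes.
bulkCfg.SyncMode = tidesdb.SyncInterval
bulkCfg.SyncIntervalUs = 128000
if err := cf.UpdateRuntimeConfig(bulkCfg, true); err != nil {
    log.Fatal(err)
}
```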
Sync Modes
Control the durability vs performance tradeoff.
```go
cfConfig := tidesdb.DefaultColumnFamilyConfig()

// SyncNone -- Fastest, least durable (OS decides when to flush; sorted runs and compactions are synced on completion)
cfConfig.SyncMode = tidesdb.SyncNone

// SyncInterval -- Balanced (periodic background syncing)
cfConfig.SyncMode = tidesdb.SyncInterval
cfConfig.SyncIntervalUs = 128000 // Sync every 128ms

// SyncFull -- Most durable (fsync on every write)
cfConfig.SyncMode = tidesdb.SyncFull

err := db.CreateColumnFamily("my_cf", cfConfig)
if err != nil {
    log.Fatal(err)
}
```
Compression Algorithms
TidesDB supports multiple compression algorithms:
```go
cfConfig := tidesdb.DefaultColumnFamilyConfig()

cfConfig.CompressionAlgorithm = tidesdb.NoCompression
cfConfig.CompressionAlgorithm = tidesdb.SnappyCompression
cfConfig.CompressionAlgorithm = tidesdb.LZ4Compression
cfConfig.CompressionAlgorithm = tidesdb.LZ4FastCompression
cfConfig.CompressionAlgorithm = tidesdb.ZstdCompression

err := db.CreateColumnFamily("my_cf", cfConfig)
if err != nil {
    log.Fatal(err)
}
```
Error Handling
```go
cf, err := db.GetColumnFamily("my_cf")
if err != nil {
    log.Fatal(err)
}

txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

err = txn.Put(cf, []byte("key"), []byte("value"), -1)
if err != nil {
    // Errors include context and error codes
    fmt.Printf("Error: %v\n", err)

    // Example error message:
    // "failed to put key-value pair: memory allocation failed (code: -1)"

    txn.Rollback()
    return
}

err = txn.Commit()
if err != nil {
    log.Fatal(err)
}
```
Error Codes
- `ErrSuccess` (0) · Operation successful
- `ErrMemory` (-1) · Memory allocation failed
- `ErrInvalidArgs` (-2) · Invalid arguments
- `ErrNotFound` (-3) · Key not found
- `ErrIO` (-4) · I/O error
- `ErrCorruption` (-5) · Data corruption
- `ErrExists` (-6) · Resource already exists
- `ErrConflict` (-7) · Transaction conflict
- `ErrTooLarge` (-8) · Key or value too large
- `ErrMemoryLimit` (-9) · Memory limit exceeded
- `ErrInvalidDB` (-10) · Invalid database handle
- `ErrUnknown` (-11) · Unknown error
- `ErrLocked` (-12) · Database is locked
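Since any operation can fail mid-transaction, a small helper that rolls back on the first error keeps call sites tidy. A sketch only; the `*tidesdb.Txn` type name is inferred from `BeginTxn`'s return value and should be checked against the package's actual exported type:

```go
// applyOps runs each operation in order, rolling back the transaction
// on the first failure and committing only if every op succeeds.
// NOTE: *tidesdb.Txn is an assumed type name; verify against the package.
func applyOps(txn *tidesdb.Txn, ops ...func() error) error {
    for _, op := range ops {
        if err := op(); err != nil {
            txn.Rollback()
            return err
        }
    }
    return txn.Commit()
}
```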
Complete Example
```go
package main

import (
    "fmt"
    "log"
    "time"

    tidesdb "github.com/tidesdb/tidesdb-go"
)

func main() {
    config := tidesdb.Config{
        DBPath:               "./example_db",
        NumFlushThreads:      1,
        NumCompactionThreads: 1,
        LogLevel:             tidesdb.LogInfo,
        BlockCacheSize:       64 * 1024 * 1024,
        MaxOpenSSTables:      256,
    }

    db, err := tidesdb.Open(config)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    cfConfig := tidesdb.DefaultColumnFamilyConfig()
    cfConfig.WriteBufferSize = 64 * 1024 * 1024
    cfConfig.CompressionAlgorithm = tidesdb.LZ4Compression
    cfConfig.EnableBloomFilter = true
    cfConfig.BloomFPR = 0.01
    cfConfig.SyncMode = tidesdb.SyncInterval
    cfConfig.SyncIntervalUs = 128000

    err = db.CreateColumnFamily("users", cfConfig)
    if err != nil {
        log.Fatal(err)
    }
    defer db.DropColumnFamily("users")

    cf, err := db.GetColumnFamily("users")
    if err != nil {
        log.Fatal(err)
    }

    txn, err := db.BeginTxn()
    if err != nil {
        log.Fatal(err)
    }

    err = txn.Put(cf, []byte("user:1"), []byte("Alice"), -1)
    if err != nil {
        txn.Rollback()
        log.Fatal(err)
    }

    err = txn.Put(cf, []byte("user:2"), []byte("Bob"), -1)
    if err != nil {
        txn.Rollback()
        log.Fatal(err)
    }

    ttl := time.Now().Add(30 * time.Second).Unix()
    err = txn.Put(cf, []byte("session:abc"), []byte("temp_data"), ttl)
    if err != nil {
        txn.Rollback()
        log.Fatal(err)
    }

    err = txn.Commit()
    if err != nil {
        log.Fatal(err)
    }
    txn.Free()

    readTxn, err := db.BeginTxn()
    if err != nil {
        log.Fatal(err)
    }
    defer readTxn.Free()

    value, err := readTxn.Get(cf, []byte("user:1"))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("user:1 = %s\n", value)

    iter, err := readTxn.NewIterator(cf)
    if err != nil {
        log.Fatal(err)
    }
    defer iter.Free()

    fmt.Println("\nAll entries:")
    iter.SeekToFirst()
    for iter.Valid() {
        key, _ := iter.Key()
        value, _ := iter.Value()
        fmt.Printf("  %s = %s\n", key, value)
        iter.Next()
    }

    stats, err := cf.GetStats()
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("\nColumn Family Statistics:\n")
    fmt.Printf("  Number of Levels: %d\n", stats.NumLevels)
    fmt.Printf("  Memtable Size: %d bytes\n", stats.MemtableSize)
}
```
Isolation Levels
TidesDB supports five MVCC isolation levels:
```go
txn, err := db.BeginTxnWithIsolation(tidesdb.IsolationReadCommitted)
if err != nil {
    log.Fatal(err)
}
defer txn.Free()
```
Available Isolation Levels
- `IsolationReadUncommitted` · Sees all data including uncommitted changes
- `IsolationReadCommitted` · Sees only committed data (default)
- `IsolationRepeatableRead` · Consistent snapshot, phantom reads possible
- `IsolationSnapshot` · Write-write conflict detection
- `IsolationSerializable` · Full read-write conflict detection (SSI)
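Under `IsolationSnapshot` and `IsolationSerializable`, Commit can fail when a conflict is detected (reported as `ErrConflict`; see Error Codes). A common pattern is to retry the whole transaction. A minimal sketch that, for simplicity, retries on any commit error rather than checking specifically for `ErrConflict`:

```go
// Retry a snapshot-isolated write a few times on commit failure.
const maxRetries = 3

for attempt := 0; attempt < maxRetries; attempt++ {
    txn, err := db.BeginTxnWithIsolation(tidesdb.IsolationSnapshot)
    if err != nil {
        log.Fatal(err)
    }

    if err := txn.Put(cf, []byte("counter"), []byte("1"), -1); err != nil {
        txn.Rollback()
        txn.Free()
        log.Fatal(err)
    }

    err = txn.Commit()
    txn.Free()
    if err == nil {
        break // committed
    }
    // Conflict (or other commit failure): loop and retry.
}
```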
Savepoints
Savepoints allow partial rollback within a transaction:
```go
txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

err = txn.Put(cf, []byte("key1"), []byte("value1"), -1)

err = txn.Savepoint("sp1")
err = txn.Put(cf, []byte("key2"), []byte("value2"), -1)

// Rollback to savepoint -- key2 is discarded, key1 remains
err = txn.RollbackToSavepoint("sp1")

// Or release savepoint without rolling back
err = txn.ReleaseSavepoint("sp1")

// Commit -- only key1 is written
err = txn.Commit()
```
Savepoint API
- `Savepoint(name string)` · Create a savepoint
- `RollbackToSavepoint(name string)` · Rollback to savepoint
- `ReleaseSavepoint(name string)` · Release savepoint without rolling back
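Per-item savepoints are a handy pattern for batches where one bad record shouldn't abort the whole transaction. A sketch; `records` and its `Key`/`Value` fields are hypothetical placeholders for application data:

```go
// records is a hypothetical batch: []struct{ Key, Value []byte }.
for i, rec := range records {
    sp := fmt.Sprintf("item%d", i)
    if err := txn.Savepoint(sp); err != nil {
        log.Fatal(err)
    }
    if err := txn.Put(cf, rec.Key, rec.Value, -1); err != nil {
        // Discard only this record's writes and continue with the batch.
        if err := txn.RollbackToSavepoint(sp); err != nil {
            log.Fatal(err)
        }
        continue
    }
    if err := txn.ReleaseSavepoint(sp); err != nil {
        log.Fatal(err)
    }
}
```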
Transaction Reset
Reset reinitializes a committed or aborted transaction for reuse with a new isolation level, avoiding the overhead of freeing and reallocating transaction resources in hot loops.
```go
txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}

// First batch of work
err = txn.Put(cf, []byte("key1"), []byte("value1"), -1)
if err != nil {
    log.Fatal(err)
}
err = txn.Commit()
if err != nil {
    log.Fatal(err)
}

// Reset instead of Free + BeginTxn
err = txn.Reset(tidesdb.IsolationReadCommitted)
if err != nil {
    log.Fatal(err)
}

// Second batch of work using the same transaction
err = txn.Put(cf, []byte("key2"), []byte("value2"), -1)
if err != nil {
    log.Fatal(err)
}
err = txn.Commit()
if err != nil {
    log.Fatal(err)
}

// Free once when done
txn.Free()
```
Behavior
- The transaction must be committed or aborted before reset; resetting an active transaction returns an error
- Internal buffers are retained to avoid reallocation
- A fresh transaction ID and snapshot sequence are assigned based on the new isolation level
- The isolation level can be changed on each reset (e.g., `IsolationReadCommitted` → `IsolationRepeatableRead`)
When to use
- Batch processing · Reuse a single transaction across many commit cycles in a loop
- Connection pooling · Reset a transaction for a new request without reallocation
- High-throughput ingestion · Reduce malloc/free overhead in tight write loops
Reset vs Free + BeginTxn
For a single transaction, Reset is functionally equivalent to calling Free followed by BeginTxnWithIsolation. The difference is performance: Reset retains allocated buffers and avoids repeated allocation overhead, which matters most in loops that commit and restart thousands of transactions.
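A hot-loop sketch of the pattern (batch contents and the loop bound are illustrative):

```go
// High-throughput ingestion: one transaction object reused across
// many commit cycles via Reset.
txn, err := db.BeginTxn()
if err != nil {
    log.Fatal(err)
}
defer txn.Free()

for batch := 0; batch < 1000; batch++ {
    key := []byte(fmt.Sprintf("key:%d", batch))
    if err := txn.Put(cf, key, []byte("value"), -1); err != nil {
        log.Fatal(err)
    }
    if err := txn.Commit(); err != nil {
        log.Fatal(err)
    }
    // Reuse the same transaction for the next batch.
    if err := txn.Reset(tidesdb.IsolationReadCommitted); err != nil {
        log.Fatal(err)
    }
}
```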
B+tree KLog Format
Column families can optionally use a B+tree structure for the key log instead of the default block-based format.
```go
cfConfig := tidesdb.DefaultColumnFamilyConfig()
cfConfig.UseBtree = 1 // Enable B+tree klog format

err := db.CreateColumnFamily("btree_cf", cfConfig)
if err != nil {
    log.Fatal(err)
}
```
Characteristics
- Point lookups · O(log N) tree traversal with binary search at each node
- Range scans · Doubly-linked leaf nodes enable efficient bidirectional iteration
- Immutable · Tree is bulk-loaded from sorted memtable data during flush
When to use B+tree klog format
- Read-heavy workloads with frequent point lookups
- Workloads where read latency is more important than write throughput
- Large SSTables where block scanning becomes expensive
Log Levels
TidesDB provides structured logging with multiple severity levels.
```go
config := tidesdb.Config{
    DBPath:   "./mydb",
    LogLevel: tidesdb.LogDebug,
}
```
Available Log Levels
- `LogDebug` · Detailed diagnostic information
- `LogInfo` · General informational messages (default)
- `LogWarn` · Warning messages for potential issues
- `LogError` · Error messages for failures
- `LogFatal` · Critical errors that may cause shutdown
- `LogNone` · Disable all logging
Log to file
```go
config := tidesdb.Config{
    DBPath:          "./mydb",
    LogLevel:        tidesdb.LogDebug,
    LogToFile:       true,             // Write to ./mydb/LOG instead of stderr
    LogTruncationAt: 24 * 1024 * 1024, // Truncate log file at 24MB
}
```
Testing
```bash
# Run all tests
go test -v

# Run specific test
go test -v -run TestOpenClose

# Run with race detector
go test -race -v

# Run B+tree test
go test -v -run TestBtreeColumnFamily

# Run clone column family test
go test -v -run TestCloneColumnFamily

# Run transaction reset test
go test -v -run TestTransactionReset

# Run checkpoint test
go test -v -run TestCheckpoint
```