TidesDB v7.2.3 & RocksDB v10.9.1 Benchmark Analysis

by Alex Gaetano Padula

published on January 15th, 2026

Following the recent v7.2.3 release, I ran another comprehensive benchmark suite comparing TidesDB against RocksDB v10.9.1. This article presents detailed performance analysis of TidesDB v7.2.3, which includes additional performance optimizations and stability improvements. Both engines are configured with sync disabled to measure the absolute performance ceiling.

Test Configuration

The test environment used 8 threads across various workloads with 16-byte keys and 100-byte values as the baseline configuration. Tests were conducted on the same hardware to ensure fair comparison.

We recommend you benchmark your own use case to determine which storage engine is best for your needs!

Hardware

  • Intel Core i7-11700K (8 cores, 16 threads) @ 4.9GHz
  • 48GB DDR4
  • Western Digital 500GB WD Blue 3D NAND Internal PC SSD (SATA)
  • Ubuntu 24.04 LTS

Software Versions

  • TidesDB v7.2.3
  • RocksDB v10.9.1
  • GCC with -O3 optimization

Default Test Configuration

  • Sync Mode · DISABLED (maximum performance)
  • Default Batch Size · 1000 operations
  • Threads · 8 concurrent threads
  • Key Size · 16 bytes
  • Value Size · 100 bytes

Large Value tests use 256-byte keys with 4KB values; Small Value tests use 16-byte keys with 64-byte values.

You can download the raw benchtool report here.

You can find the benchtool source code here and run your own benchmarks!
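To make the sync-disabled setting concrete, here is a minimal sketch of that configuration through RocksDB's C API (rocksdb/c.h). This is illustrative only; the benchtool's actual setup may differ, and TidesDB exposes an equivalent durability knob through its own API.

```c
#include <stdio.h>
#include <rocksdb/c.h>

int main(void) {
    char *err = NULL;

    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_options_set_create_if_missing(opts, 1);

    rocksdb_t *db = rocksdb_open(opts, "/tmp/bench-db", &err);
    if (err != NULL) { fprintf(stderr, "open failed: %s\n", err); return 1; }

    /* Sync disabled: writes are acknowledged once they reach the OS page
       cache rather than being fsync'd per commit. This measures the
       performance ceiling at the cost of durability on power loss. */
    rocksdb_writeoptions_t *wo = rocksdb_writeoptions_create();
    rocksdb_writeoptions_set_sync(wo, 0);

    rocksdb_put(db, wo, "key", 3, "value", 5, &err);
    if (err != NULL) { fprintf(stderr, "put failed: %s\n", err); return 1; }

    rocksdb_writeoptions_destroy(wo);
    rocksdb_close(db);
    rocksdb_options_destroy(opts);
    return 0;
}
```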

Performance Overview

Sequential Write Performance

The first test measures pure sequential write throughput with 10M operations and batch size of 1000.

Results

| Engine  | Throughput (ops/sec) | Duration (sec) | Avg Latency (μs) | p99 Latency (μs) | Max Latency (μs) | CV (%) |
|---------|----------------------|----------------|------------------|------------------|------------------|--------|
| TidesDB | 7,147,575            | 1.399          | 1,028            | 1,798            | 4,019            | 26.25  |
| RocksDB | 2,272,751            | 4.400          | 3,519            | 4,848            | 364,824          | 348.92 |
| Ratio   | 3.14x                |                |                  |                  |                  |        |

TidesDB achieves 3.14x higher throughput with dramatically better tail latencies. RocksDB’s maximum latency of 364ms versus TidesDB’s 4ms reveals vastly different consistency profiles. The coefficient of variation tells the story: TidesDB at 26.25% versus RocksDB’s 348.92%. This means RocksDB exhibits highly unpredictable performance with occasional extreme outliers.
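For reference, the batch=1000 write pattern looks roughly like the sketch below (RocksDB's C API shown for illustration; this is not the benchtool's exact code). One thousand puts are staged in a write batch and committed with a single rocksdb_write call, amortizing per-commit overhead:

```c
/* Sketch of the batch=1000 write loop, continuing from the open/options
   sketch above (needs <string.h>; `db` and `wo` as defined there). */
char key[16], val[100];
char *err = NULL;
rocksdb_writebatch_t *wb = rocksdb_writebatch_create();

for (long i = 0; i < 10000000L; i++) {
    snprintf(key, sizeof(key), "%015ld", i);   /* 16-byte sequential key */
    memset(val, 'v', sizeof(val));             /* 100-byte value */
    rocksdb_writebatch_put(wb, key, sizeof(key), val, sizeof(val));

    if ((i + 1) % 1000 == 0) {                 /* commit every 1,000 ops */
        rocksdb_write(db, wo, wb, &err);
        if (err != NULL) { fprintf(stderr, "write failed: %s\n", err); break; }
        rocksdb_writebatch_clear(wb);
    }
}
rocksdb_writebatch_destroy(wb);
```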

Write amplification comparison:

  • TidesDB · 1.08x
  • RocksDB · 1.43x

TidesDB’s roughly 24% lower write amplification (equivalently, RocksDB writes about 32% more) matters significantly for write-heavy workloads, where every byte written translates to storage wear and I/O overhead.
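As a sanity check on these figures: write amplification here is physical bytes written divided by logical bytes ingested. 10M operations at 116 bytes each (16-byte key plus 100-byte value) is roughly 1,106 MiB of logical data; dividing the disk-write totals from the Disk I/O section below (1,200 MB and 1,585 MB, read as MiB) by that volume gives approximately 1.08x and 1.43x, matching the numbers above.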

Database sizes after 10M writes:

  • TidesDB · 110.66 MB
  • RocksDB · 207.93 MB

TidesDB achieves 47% smaller database size, demonstrating superior space efficiency.

Random Write Performance

Random writes are significantly harder than sequential writes. Here are the results for 10M random write operations with a batch size of 1000:

Results

| Engine  | Throughput (ops/sec) | Avg Latency (μs) | p99 Latency (μs) | Max Latency (μs) | CV (%) |
|---------|----------------------|------------------|------------------|------------------|--------|
| TidesDB | 2,425,226            | 3,006            | 6,160            | 7,356            | 39.26  |
| RocksDB | 1,434,122            | 5,577            | 6,641            | 1,200,202        | 630.40 |
| Ratio   | 1.69x                |                  |                  |                  |        |

The gap narrows to 1.69x but TidesDB still dominates. More interesting is the latency distribution - RocksDB hits 1.2 seconds on the max latency while TidesDB stays under 8ms. That 630% coefficient of variation for RocksDB indicates wildly unpredictable performance with severe tail latency spikes.

Write amplification:

  • TidesDB · 1.12x
  • RocksDB · 1.32x

Database size:

  • TidesDB · 90.29 MB (smaller)
  • RocksDB · 116.55 MB

Random Read Performance

10M random read operations from a pre-populated database:

Results

| Engine  | Throughput (ops/sec) | Avg Latency (μs) | p50 (μs) | p95 (μs) | p99 (μs) | Max (μs) |
|---------|----------------------|------------------|----------|----------|----------|----------|
| TidesDB | 2,923,033            | 2.53             | 2.00     | 4.00     | 5.00     | 913      |
| RocksDB | 1,361,479            | 5.54             | 5.00     | 10.00    | 14.00    | 4,049    |
| Ratio   | 2.15x                |                  |          |          |          |          |

TidesDB delivers 2.15x higher throughput with single-digit-microsecond p50 latency (2μs vs 5μs). The consistency is remarkable - TidesDB’s p99 latency of 5μs means 99% of reads complete in under 5 microseconds. RocksDB’s maximum latency of 4ms versus TidesDB’s 913μs shows 4.4x better tail behavior.

This performance comes from TidesDB’s optimized block cache and efficient skip list implementation with early termination in the read path.
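Each lookup in this test is timed individually and fed into a latency histogram. A stripped-down version of such a harness might look like this (RocksDB's C API for illustration, not the benchtool's actual code):

```c
/* Sketch of a single timed point lookup (needs <time.h> and <stdlib.h>;
   `db` from the configuration sketch above). Percentiles such as p50/p99
   come from sorting or bucketing the collected samples. */
rocksdb_readoptions_t *ro = rocksdb_readoptions_create();
char key[16], *err = NULL;
size_t vlen;
struct timespec t0, t1;

snprintf(key, sizeof(key), "%015ld", rand() % 10000000L);  /* random key */

clock_gettime(CLOCK_MONOTONIC, &t0);
char *val = rocksdb_get(db, ro, key, sizeof(key), &vlen, &err);
clock_gettime(CLOCK_MONOTONIC, &t1);

long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
/* record `ns` in the histogram here */

if (val != NULL) rocksdb_free(val);
rocksdb_readoptions_destroy(ro);
```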

Seek Performance (Block Index Effectiveness)

Seek operations test the efficiency of block index lookups. These benchmarks measure how quickly the engine can position an iterator at a specific key.
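Concretely, a seek positions an iterator at the first key greater than or equal to a target. A minimal sketch, again using RocksDB's C API for illustration (the key value is a made-up example):

```c
/* Sketch of one seek: the engine consults its block index to jump
   directly to the SSTable block containing the target key. */
rocksdb_readoptions_t *ro = rocksdb_readoptions_create();
rocksdb_iterator_t *it = rocksdb_create_iterator(db, ro);

const char *target_key = "000000000123456";        /* 16 bytes: 15 digits + NUL */
rocksdb_iter_seek(it, target_key, 16);             /* position at first key >= target */
if (rocksdb_iter_valid(it)) {
    size_t klen;
    const char *k = rocksdb_iter_key(it, &klen);   /* key the seek landed on */
    (void)k;
}

rocksdb_iter_destroy(it);
rocksdb_readoptions_destroy(ro);
```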

Random Seek

5M random seek operations:

  • TidesDB · 1,365,406 ops/sec
  • RocksDB · 916,721 ops/sec
  • Advantage · 1.49x faster

Average latency · 5.41μs vs 7.69μs

Sequential Seek

5M sequential seek operations:

  • TidesDB · 1,727,162 ops/sec
  • RocksDB · 1,818,731 ops/sec
  • Advantage · 0.95x (RocksDB 1.05x faster)

RocksDB shows a slight edge on sequential seeks, likely due to its level-based organization benefiting sequential access patterns.

Zipfian Seek (Hot Keys)

5M seek operations with Zipfian distribution (~660K unique keys):

  • TidesDB · 3,235,109 ops/sec
  • RocksDB · 619,098 ops/sec
  • Advantage · 5.22x faster

Average latency · 1.39μs vs 11.97μs

The 5.22x advantage on Zipfian seeks is the largest performance gap in all benchmarks. Because TidesDB’s Spooky compaction consolidates hot keys into fewer SSTables, seeks touch fewer files and cache hit rates improve dramatically. Each seek operation uses the block index to jump directly to the appropriate SSTable block, and TidesDB’s aggressive compaction of hot keys makes this extraordinarily efficient.
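For anyone reproducing the hot-key workload, a Zipfian sampler can be built by precomputing the distribution's CDF once and binary-searching it per draw. The sketch below assumes exponent s = 1.0 and a plain rand() source; benchtool's actual generator and parameters may differ.

```c
/* Minimal Zipfian sampler over n keys (exponent and RNG are assumptions,
   not necessarily benchtool's). zipf_init precomputes the CDF; zipf_next
   draws a rank, with rank 0 the hottest key. */
#include <stdlib.h>
#include <math.h>

static double *cdf;    /* cumulative probabilities, cdf[n-1] == 1.0 */
static long zipf_n;

void zipf_init(long n, double s) {
    zipf_n = n;
    cdf = malloc(n * sizeof(double));
    double norm = 0.0, cum = 0.0;
    for (long i = 1; i <= n; i++) norm += 1.0 / pow((double)i, s);
    for (long i = 1; i <= n; i++) {
        cum += (1.0 / pow((double)i, s)) / norm;
        cdf[i - 1] = cum;
    }
}

long zipf_next(void) {
    double u = (double)rand() / RAND_MAX;
    long lo = 0, hi = zipf_n - 1;
    while (lo < hi) {                  /* first index with cdf[i] >= u */
        long mid = lo + (hi - lo) / 2;
        if (cdf[mid] < u) lo = mid + 1; else hi = mid;
    }
    return lo;
}
```

Calling zipf_init(660000, 1.0) would mirror the ~660K unique keys used in this test.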

The Zipfian mixed workload (50/50 read/write with hot keys) shows similarly dominant results:

  • PUT · 3,117,438 vs 1,523,813 ops/sec (2.05x faster)
  • GET · 2,910,674 vs 1,799,383 ops/sec (1.62x faster)
  • Database size · 10.21 MB vs 65.39 MB (84% smaller!)

Database sizes after Zipfian workload:

  • TidesDB · 10.24 MB
  • RocksDB · 37.37 MB

TidesDB achieves 73% smaller database by consolidating hot keys. This dramatic space efficiency comes from Spooky compaction’s ability to recognize and merge frequently-accessed keys into optimally-sized SSTables.

Mixed Workload (50/50 Read/Write)

5M total operations (2.5M reads, 2.5M writes) with random keys:

Write Performance

  • TidesDB · 2,608,626 ops/sec
  • RocksDB · 2,037,379 ops/sec
  • Advantage · 1.28x faster

Read Performance

  • TidesDB · 1,477,318 ops/sec
  • RocksDB · 1,353,572 ops/sec
  • Advantage · 1.09x faster

Read latency breakdown

  • Average · 4.93μs vs 5.39μs
  • p99 · 16μs vs 15μs
  • Max · 3,769μs vs 4,030μs

TidesDB delivers higher throughput on both reads and writes in this balanced workload, demonstrating strong performance across mixed access patterns. The read coefficient of variation is 91.30% vs 67.32%, meaning RocksDB has the tighter distribution in this particular workload, though TidesDB still maintains competitive tail latencies.

Delete Performance

Batched Deletes (batch=1000)

5M delete operations in batches of 1000:

  • TidesDB · 2,883,676 ops/sec
  • RocksDB · 3,092,427 ops/sec
  • Advantage · 0.93x (RocksDB 1.07x faster)

Average latency · 2,640μs vs 2,586μs

Write amplification:

  • TidesDB · 0.18x
  • RocksDB · 0.29x

Both engines perform similarly since deletes are tombstone writes. RocksDB shows a slight edge in throughput, but TidesDB’s 38% lower write amplification demonstrates more efficient tombstone compaction.

Unbatched Deletes (batch=1)

5M individual delete operations:

  • TidesDB · 1,142,446 ops/sec
  • RocksDB · 917,000 ops/sec
  • Advantage · 1.25x faster

Average latency · 6.76μs vs 8.59μs

Without batching, TidesDB’s lock-free architecture provides clearer advantages with 25% higher throughput.

Range Queries

100-Key Range Scans

1M range queries, each returning 100 consecutive keys:

Sequential Keys

  • TidesDB · 471,836 ops/sec
  • RocksDB · 443,209 ops/sec
  • Advantage · 1.06x faster

Average latency · 16.20μs vs 17.65μs

Random Keys

  • TidesDB · 366,062 ops/sec
  • RocksDB · 298,534 ops/sec
  • Advantage · 1.23x faster

Average latency · 20.17μs vs 26.33μs

1000-Key Range Scans

500K range queries, each returning 1000 consecutive keys:

  • TidesDB · 50,022 ops/sec
  • RocksDB · 47,349 ops/sec
  • Advantage · 1.06x faster

Average latency · 156.57μs vs 165.47μs

TidesDB shows consistent advantages across range queries of different sizes, with particularly strong performance on random range scans (1.23x faster). The improved skip list iterator in v7.2.3 contributes to these results.
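Each range query here is one seek followed by sequential iterator advances, sketched below for the 100-key case (RocksDB's C API for illustration; the start key is a made-up example):

```c
/* Sketch of a single 100-key range scan: one seek, then up to 100
   sequential iterator advances, touching each key/value pair. */
rocksdb_readoptions_t *ro = rocksdb_readoptions_create();
rocksdb_iterator_t *it = rocksdb_create_iterator(db, ro);

const char *start_key = "000000000500000";   /* 16-byte start key */
rocksdb_iter_seek(it, start_key, 16);

for (int n = 0; n < 100 && rocksdb_iter_valid(it); n++) {
    size_t klen, vlen;
    const char *k = rocksdb_iter_key(it, &klen);
    const char *v = rocksdb_iter_value(it, &vlen);
    (void)k; (void)v;                        /* benchmark just touches the pair */
    rocksdb_iter_next(it);
}

rocksdb_iter_destroy(it);
rocksdb_readoptions_destroy(ro);
```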

Batch Size Impact

Testing 10M write operations with varying batch sizes:

| Batch Size   | TidesDB (ops/sec) | RocksDB (ops/sec) | Ratio |
|--------------|-------------------|-------------------|-------|
| 1 (no batch) | 1,028,337         | 851,915           | 1.21x |
| 10           | 2,644,832         | 1,554,934         | 1.70x |
| 100          | 2,974,076         | 2,026,814         | 1.47x |
| 1000         | 2,425,226         | 1,434,122         | 1.69x |

TidesDB outperforms RocksDB across all batch sizes, with advantages ranging from 1.21x to 1.70x. The largest gap appears at batch size 10, where TidesDB’s optimized batch write path shows its strength.

Average latency patterns:

  • Batch=1 · 7.44μs vs 9.10μs (TidesDB 18% better)
  • Batch=10 · 28.94μs vs 51.35μs (TidesDB 44% better)
  • Batch=100 · 261.53μs vs 394.56μs (TidesDB 34% better)
  • Batch=1000 · 3,006μs vs 5,577μs (TidesDB 46% better)

The consistency advantage is even more pronounced. Coefficient of variation comparison:

  • Batch=1 · 565.61% vs 1182.35%
  • Batch=10 · 188.90% vs 3905.97%
  • Batch=100 · 125.16% vs 1017.94%
  • Batch=1000 · 39.26% vs 630.40%

RocksDB shows extreme variability at smaller batch sizes with CV exceeding 1000%, while TidesDB maintains much tighter distributions.
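For reference, the coefficient of variation (CV) is the standard deviation of per-operation latency divided by the mean, expressed as a percentage; a CV above 100% means the latency spread exceeds the average itself, i.e., outliers dominate the distribution.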

Large Value Performance

1M write operations with 256-byte keys and 4KB values:

  • TidesDB · 301,211 ops/sec
  • RocksDB · 140,257 ops/sec
  • Advantage · 2.15x faster

Average latency · 23,958μs vs 56,938μs
p99 latency · 59,775μs vs 603,365μs (10x better!)

The 2.15x advantage on large values is impressive. More striking is the tail latency difference - RocksDB’s p99 hitting 603ms versus TidesDB’s 60ms represents a 10x improvement. The coefficient of variation tells the consistency story: 32.41% vs 183.81%.

Write amplification:

  • TidesDB · 1.05x
  • RocksDB · 1.21x

Database size:

  • TidesDB · 302.03 MB (13% smaller)
  • RocksDB · 346.71 MB

TidesDB’s key-value separation architecture excels with larger values, keeping keys in SSTables while storing values efficiently.
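The following is a conceptual sketch of key-value separation in the style of WiscKey-like designs, not TidesDB's actual on-disk format: the sorted structure stays small because SSTables hold only keys plus value pointers, while the large values live in a separate log.

```c
/* Conceptual sketch only -- not TidesDB's real format. With 4KB values,
   the sorted entry stays under ~300 bytes, so far more entries fit per
   SSTable block and compaction rewrites far fewer bytes. */
#include <stdint.h>

struct sst_entry {
    char     key[256];    /* 256-byte key, as in the large-value test */
    uint64_t vlog_offset; /* where the value lives in the value log */
    uint32_t vlen;        /* value length (4KB in this test) */
};
```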

Small Value Performance

50M write operations with 16-byte keys and 64-byte values:

  • TidesDB · 1,817,450 ops/sec
  • RocksDB · 1,412,926 ops/sec
  • Advantage · 1.29x faster

Average latency · 4,297μs vs 5,662μs
Max latency · 266ms vs 1,863ms (7x better!)

Write amplification:

  • TidesDB · 1.19x
  • RocksDB · 1.53x

Database size:

  • TidesDB · 514.18 MB (12% larger)
  • RocksDB · 459.50 MB

On small values, TidesDB trades slightly higher space usage for better throughput and meaningfully lower write amplification (about 22% lower).

Iteration Performance

Full database iteration speeds after various workloads:

Write Workloads

  • Sequential write (10M keys) · 8.03M vs 5.18M (1.55x faster)
  • Random write (10M keys) · 3.03M vs 3.99M (0.76x slower)
  • Zipfian write (658K keys) · 3.65M vs 0.95M (3.86x faster)

Read Workloads

  • Random read (10M keys) · 8.25M vs 5.81M (1.42x faster)

Mixed Workloads

  • 50/50 mixed (5M keys) · 2.98M vs 4.52M (0.66x slower)

TidesDB shows exceptional iteration performance on sequential and Zipfian workloads. The 3.86x advantage on Zipfian iteration demonstrates how Spooky compaction’s aggressive consolidation of hot keys dramatically improves scan performance.

Resource Usage

Memory Consumption (Peak RSS)

Peak RSS tells a mixed story: TidesDB uses slightly less memory on the 10M write workloads but considerably more on large-value and read-heavy workloads, consistent with its transient, memory-optimized design:

  • 10M sequential write · 2,478 MB vs 2,748 MB (10% less)
  • 10M random write · 2,486 MB vs 2,713 MB (8% less)
  • 1M large values · 3,393 MB vs 1,210 MB (180% more)
  • 50M small values · 8,911 MB vs 8,483 MB (5% more)
  • 10M random read · 1,690 MB vs 294 MB (475% more)

The high memory usage on reads reflects TidesDB’s aggressive caching strategy for maximum read performance.

Disk I/O

Disk writes (MB written):

  • 10M sequential · 1,200 MB vs 1,585 MB (24% less)
  • 10M random · 1,236 MB vs 1,462 MB (15% less)
  • 1M large values · 4,363 MB vs 5,011 MB (13% less)
  • 50M small values · 4,531 MB vs 5,831 MB (22% less)

TidesDB consistently writes 13-24% less data to disk, reducing SSD wear and improving throughput.

CPU Utilization

TidesDB shows higher CPU utilization in most workloads:

  • Sequential writes · 501% vs 281% (1.78x higher)
  • Random writes · 540% vs 277% (1.95x higher)
  • Random reads · 530% vs 648% (18% lower)

The higher CPU usage reflects TidesDB’s lock-free algorithms trading CPU cycles for reduced lock contention. On highly parallel workloads with available CPU cores, this trade-off delivers higher throughput.

Tail Latency · Where TidesDB Truly Shines

One of the most striking differences between TidesDB and RocksDB is tail latency behavior. While average throughput tells part of the story, maximum latencies reveal how each engine handles worst-case scenarios - critical for applications requiring predictable performance.

Maximum Latency Comparison (lower is better)

| Workload          | TidesDB Max | RocksDB Max  | TidesDB Advantage |
|-------------------|-------------|--------------|-------------------|
| Sequential Write  | 4,019 μs    | 364,824 μs   | 91x better        |
| Random Write      | 7,356 μs    | 1,200,202 μs | 163x better       |
| Batch=10 Write    | 32,301 μs   | 436,487 μs   | 14x better        |
| Batch=100 Write   | 42,867 μs   | 298,039 μs   | 7x better         |
| Small Value (64B) | 266,008 μs  | 1,863,272 μs | 7x better         |
| Large Value (4KB) | 107,425 μs  | 848,767 μs   | 8x better         |

RocksDB’s maximum latencies frequently exceed 1 second, while TidesDB keeps worst-case latencies under 300ms in all tested scenarios. For random writes, RocksDB’s 1.2-second max latency versus TidesDB’s 7ms represents a 163x improvement - the largest tail latency gap in all benchmarks.

This consistency comes from TidesDB’s architecture:

  • Lock-free data structures eliminate blocking on concurrent operations
  • Predictable and background compaction avoids sudden I/O storms
  • Memory and CPU optimized design reduces disk-induced latency spikes

For latency-sensitive applications like real-time analytics, gaming backends, or financial systems, this predictability is often more valuable than raw throughput.

Summary

TidesDB v7.2.3 demonstrates substantial performance advantages over RocksDB v10.9.1 across the majority of workloads tested:

Write Performance

  • 3.14x faster sequential writes
  • 1.69x faster random writes
  • 2.15x faster large value (4KB) writes
  • Consistent 1.21-1.70x advantages across all batch sizes

Read Performance

  • 2.15x faster point lookups
  • 1.49x faster random seeks
  • Single-digit-microsecond p50 latency (2μs)

Hot Key Excellence

  • 5.22x faster Zipfian seeks (largest advantage across all benchmarks)
  • 3.86x faster Zipfian iteration
  • 73% smaller databases for hot key workloads

Resource Efficiency

  • 13-38% lower write amplification
  • 13-24% less disk I/O
  • 5-47% smaller databases for most workloads

Latency Consistency

  • Dramatically tighter latency distributions (e.g., CV of 39% vs 630% on random writes)
  • 10x better p99 latencies on large values
  • Up to 163x better maximum latencies (random writes: 7ms vs 1.2 seconds)
  • No extreme tail latency spikes across all workloads

Where RocksDB Leads

  • Batched deletes (1.07x faster)
  • Sequential seeks (1.05x faster)
  • Memory efficiency (3-6x less RAM in some workloads)
  • Some iteration workloads (random writes, mixed)

The choice between engines depends on your workload characteristics and constraints. TidesDB excels at:

  • Write-heavy workloads requiring high throughput
  • Hot key patterns (social feeds, caching, analytics)
  • Point lookup and seek operations
  • Scenarios where latency consistency is critical
  • Applications that can trade transient memory for performance

RocksDB remains competitive for:

  • Memory-constrained environments
  • Workloads favoring sequential access patterns
  • Applications requiring minimal resource footprint

The v7.2.3 release solidifies TidesDB’s position as a high-performance alternative to RocksDB, particularly for transient, memory-rich deployments where throughput and latency consistency are paramount.

Thanks for reading!


Join the TidesDB Discord for more updates and discussions at https://discord.gg/tWEmjR66cy