
TidesDB vs RocksDB: Performance Benchmarks

by Alex Gaetano Padula

published on November 17th, 2025

! This article presents performance data from an older version of TidesDB (v4.0.0) and should be considered outdated. Please refer to more recent benchmark analyses for current performance comparisons; it is preserved here for historical reference and analysis.

This article presents comprehensive performance benchmarks comparing TidesDB and RocksDB, two LSM-tree based storage engines. Both are designed for write-heavy workloads, but they differ significantly in architecture, complexity, and performance characteristics.

We recommend you benchmark your own use case to determine which storage engine is best for your needs!

Test Environment

Hardware

  • Intel Core i7-11700K (8 cores, 16 threads) @ 4.9GHz
  • 48GB DDR4
  • Seagate ST4000DM004-2U9104 4TB HDD
  • Ubuntu 23.04 x86_64, kernel 6.2.0-39-generic

Software Versions

  • TidesDB v4.0.0
  • RocksDB v10.7.5 (via benchtool)
  • GCC with -O3 optimization
  • GNOME 44.3

Benchtool

The benchtool is a custom pluggable benchmarking tool that provides fair, apples-to-apples comparisons between storage engines. You can find the repo here: benchtool.

Configuration (matched for both engines)

  • Bloom filters: Enabled (10 bits per key)
  • Block cache: 64MB (HyperClockCache for RocksDB, FIFO for TidesDB)
  • Memtable flush size: 64MB
  • Sync mode: Disabled (maximum performance)
  • Compression: LZ4
  • Threads: 4 (all tests)
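
For readers reproducing a comparable setup, here is a minimal sketch of how these settings map onto RocksDB's C API. This is an illustration, not benchtool's actual configuration code; in particular, rocksdb_cache_create_lru stands in for the HyperClockCache used in the benchmark, which is typically configured through the C++ API.

```c
#include <rocksdb/c.h>

/* Sketch of the matched settings via RocksDB's C API; illustrative,
   not benchtool's actual configuration code. */
rocksdb_options_t *build_matched_options(void) {
    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_options_set_create_if_missing(opts, 1);
    rocksdb_options_set_write_buffer_size(opts, 64 * 1024 * 1024); /* 64MB memtable */
    rocksdb_options_set_compression(opts, rocksdb_lz4_compression); /* LZ4 */

    /* Block-based table options: 64MB block cache plus a bloom filter
       at 10 bits per key. LRU stands in here for the HyperClockCache
       used in the benchmark, which is configured via the C++ API. */
    rocksdb_block_based_table_options_t *table =
        rocksdb_block_based_options_create();
    rocksdb_block_based_options_set_block_cache(
        table, rocksdb_cache_create_lru(64 * 1024 * 1024));
    rocksdb_block_based_options_set_filter_policy(
        table, rocksdb_filterpolicy_create_bloom(10.0));
    rocksdb_options_set_block_based_table_factory(opts, table);
    return opts;
}

/* Sync disabled: WAL writes are not fsync'd on every operation. */
rocksdb_writeoptions_t *build_write_options(void) {
    rocksdb_writeoptions_t *wo = rocksdb_writeoptions_create();
    rocksdb_writeoptions_set_sync(wo, 0);
    return wo;
}
```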

The benchtool measures

  • Operations per second (ops/sec)
  • Average, P50, P95, P99, min, max (microseconds)
  • Memory (RSS/VMS), disk I/O, CPU utilization
  • Write and space amplification
  • Full database scan performance
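
For reference, the two amplification factors are ratios against the logical data volume the workload produced, matching how they are described in the analysis sections below:

  • write amplification = bytes physically written to disk ÷ logical bytes written by the workload
  • space amplification = on-disk database size after the run ÷ logical data size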

Benchmark Methodology

All tests use 4 threads for concurrent operations. Keys are 16 bytes (256 bytes in the large-value test) and values are 100 bytes (64 bytes in the small-value test, 4KB in the large-value test). Sync mode is disabled for maximum throughput. Operation counts are 10M for the write and read tests, 5M for the mixed, delete, and Zipfian tests, 50M for the small-value test, and 1M for the large-value test.
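
As a rough sketch of the measurement harness, the following pthread-based loop shows the shape of a 4-thread throughput test. The do_put stub is a hypothetical placeholder for the engine's put call; benchtool's real harness additionally records latency percentiles, resource usage, and amplification.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define THREADS 4
#define OPS_PER_THREAD 2500000ULL /* 4 x 2.5M = 10M total operations */

/* Hypothetical stand-in for the engine's put call; a real run would
   swap in the TidesDB or RocksDB binding here. */
static void do_put(uint64_t key) { (void)key; }

static void *worker(void *arg) {
    uint64_t base = (uint64_t)(uintptr_t)arg * OPS_PER_THREAD;
    for (uint64_t i = 0; i < OPS_PER_THREAD; i++)
        do_put(base + i); /* sequential keys; randomize for random tests */
    return NULL;
}

int main(void) {
    pthread_t threads[THREADS];
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)(uintptr_t)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(threads[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.0f ops/sec\n", THREADS * OPS_PER_THREAD / secs);
    return 0;
}
```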

Performance Summary Table

| Test | TidesDB | RocksDB | Advantage |
|---|---|---|---|
| Sequential Write (10M) | 997K ops/sec | 652K ops/sec | 1.53x faster |
| Random Write (10M) | 956K ops/sec | 533K ops/sec | 1.80x faster |
| Random Read (10M) | 9.51M ops/sec | 1.81M ops/sec | 5.24x faster |
| Mixed Workload (5M) | 951K PUT / 2.05M GET | 636K PUT / 1.58M GET | 1.49x / 1.30x faster |
| Zipfian Write (5M) | 877K ops/sec | 492K ops/sec | 1.78x faster |
| Zipfian Mixed (5M) | 923K PUT / 1.26M GET | 476K PUT / 970K GET | 1.94x / 1.30x faster |
| Delete (5M) | 972K ops/sec | 695K ops/sec | 1.40x faster |
| Large Values (1M, 4KB) | 183K ops/sec | 34K ops/sec | 5.38x faster |
| Small Values (50M, 64B) | 923K ops/sec | 512K ops/sec | 1.80x faster |

Detailed Benchmark Results

The following sections provide detailed results for each benchmark test with latency distributions, resource usage, and amplification factors.

1. Sequential Write Performance

10M operations, 4 threads, sequential keys

In sequential write testing with 10 million operations across 4 threads, TidesDB achieves 997K operations per second compared to RocksDB’s 652K ops/sec, representing a 1.53x throughput advantage. TidesDB’s average latency is an impressive 3.76μs, with a P99 of 10μs and a maximum of 3626μs. The most dramatic difference appears in iteration performance, where TidesDB scans at 11.11M ops/sec versus RocksDB’s 5.14M ops/sec, a 2.16x advantage demonstrating TidesDB’s efficient SSTable design.

Resource usage shows TidesDB utilizing 2704 MB RSS with 2309 MB disk writes and 716.7% CPU utilization, while RocksDB uses 2894 MB RSS with 1810 MB disk writes and 417.4% CPU utilization. TidesDB’s higher CPU utilization indicates better multi-core scaling. Write amplification measures 2.09x for TidesDB versus 1.64x for RocksDB, while space amplification is 1.69x versus 0.31x respectively.

TidesDB’s larger database size reflects its architectural design choices: embedded succinct trie indexes in SSTables for fast lookups and a simplified LSM structure (active memtable → immutable memtables → SSTables). While TidesDB triggers compaction aggressively (when SSTable count reaches a threshold, default 32), its embedded indexes and architectural design result in larger on-disk footprint compared to RocksDB’s multi-level LSM tree.
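
To make the trigger concrete, here is an illustrative sketch of count-based compaction scheduling. The types, threshold name, and helper are hypothetical, not TidesDB internals:

```c
/* Illustrative sketch of a count-based compaction trigger. The struct
   and helper names are hypothetical, not TidesDB source. */
#define SSTABLE_COMPACTION_THRESHOLD 32

struct column_family {
    int sstable_count; /* SSTables currently on disk for this family */
    /* ... active memtable, immutable memtables, SSTable list ... */
};

/* Hypothetical helper: merge SSTables and drop tombstones in the
   background. */
static void schedule_background_compaction(struct column_family *cf) {
    (void)cf;
}

static void after_flush(struct column_family *cf) {
    cf->sstable_count++; /* each memtable flush adds one SSTable */
    /* Contrast with RocksDB, whose leveled compaction triggers per
       level on size ratios rather than on a flat file count. */
    if (cf->sstable_count >= SSTABLE_COMPACTION_THRESHOLD)
        schedule_background_compaction(cf);
}
```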

2. Random Write Performance

10M operations, 4 threads, random keys

Random write performance demonstrates strong results for TidesDB. Throughput reaches 956K operations per second versus RocksDB’s 533K ops/sec, an 80% advantage (1.80x faster). Average latency measures 3.90μs with a P50 of 4μs, P95 of 6μs, P99 of 10μs, and maximum of 8645μs. Iteration speed maintains TidesDB’s advantage at 11.27M ops/sec compared to RocksDB’s 5.14M ops/sec, a 2.19x improvement highlighting the efficiency of TidesDB’s two-tier LSM architecture.

Resource consumption shows TidesDB using 2509 MB RSS with 2926 MB disk writes and achieving 720.0% CPU utilization, while RocksDB uses 2876 MB RSS with 2052 MB disk writes and 347.0% CPU utilization. TidesDB’s significantly higher CPU utilization (720% vs 347%) indicates superior multi-core scaling, effectively leveraging available CPU resources. Write amplification is 2.65x for TidesDB versus 1.85x for RocksDB, while space amplification measures 1.60x versus 0.44x respectively.

3. Random Read Performance

10M operations, 4 threads, random keys (pre-populated database)

Random read testing reveals TidesDB’s most impressive performance advantage. With 10 million operations across 4 threads on a pre-populated database, TidesDB achieves 9.51 million operations per second compared to RocksDB’s 1.81 million ops/sec, a remarkable 5.24x improvement. Average latency is exceptional at just 0.28μs versus RocksDB’s 1.94μs, representing 6.9x lower latency.

The latency distribution tells an even more compelling story. TidesDB’s P50 latency of 0μs indicates that most reads complete in under 1 microsecond, with P95 and P99 both at 1μs and a maximum of 62μs. In contrast, RocksDB shows P50 of 2μs, P95 of 3μs, P99 of 5μs, and maximum of 426μs. This sub-microsecond read performance demonstrates the effectiveness of TidesDB’s lock-free read architecture, where readers never acquire locks, never block, and scale linearly with CPU cores.
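
Conceptually, a lock-free read path like this can be built on an atomically published snapshot pointer. The C11 sketch below illustrates the general technique, not TidesDB's actual implementation:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Simplified illustration of a lock-free read path, not TidesDB's
   actual code: writers build an immutable snapshot off to the side and
   publish it with a release store; readers take an acquire load and
   search it without ever locking or blocking. */
struct snapshot {
    /* immutable memtable contents, SSTable index handles, ... */
    int placeholder;
};

static _Atomic(struct snapshot *) current_snapshot;

/* Reader: a single atomic load; no locks, so readers scale with cores. */
struct snapshot *reader_acquire(void) {
    return atomic_load_explicit(&current_snapshot, memory_order_acquire);
}

/* Writer: publish a fully built snapshot in one atomic step. */
void writer_publish(struct snapshot *next) {
    atomic_store_explicit(&current_snapshot, next, memory_order_release);
}
```

A production design also needs safe memory reclamation (epochs, hazard pointers, or reference counting) so that a snapshot is never freed while a reader still holds it.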

Resource usage shows TidesDB utilizing 2530 MB RSS with 661.6% CPU utilization, while RocksDB uses significantly less memory at 191 MB RSS with 335.3% CPU utilization. Despite the higher memory footprint, TidesDB’s performance advantage is undeniable for read-heavy workloads.

4. Mixed Workload (50% Reads, 50% Writes)

5M operations, 4 threads, random keys

Mixed workload testing with 5 million operations demonstrates TidesDB’s ability to excel at both reads and writes simultaneously. Write throughput reaches 951K PUT operations per second compared to RocksDB’s 636K ops/sec, a 49% advantage (1.49x faster). Read performance is equally impressive at 2.05 million GET operations per second versus RocksDB’s 1.58 million ops/sec, representing a 30% improvement (1.30x faster). Iteration speed shows a dramatic 2.67x advantage at 13.57M ops/sec compared to RocksDB’s 5.07M ops/sec, demonstrating superior scan performance.

Latency metrics reveal TidesDB’s consistency under mixed load. PUT operations average 3.91μs with P50 of 4μs, P95 of 6μs, P99 of 10μs, and maximum of 4044μs. GET operations are even faster, averaging 1.54μs with P50 of 1μs, P95 of 3μs, P99 of 4μs, and maximum of 1758μs. This demonstrates that TidesDB maintains low latency for both operation types even when they’re running concurrently.

Resource consumption shows TidesDB using 1263 MB RSS with 1736 MB disk writes and 689.8% CPU utilization, while RocksDB uses 1508 MB RSS with 907 MB disk writes and 377.1% CPU utilization. Write amplification measures 3.14x for TidesDB versus 1.64x for RocksDB, while space amplification is 1.67x versus 0.49x respectively. Despite higher amplification factors, TidesDB’s throughput advantages make it compelling for mixed workloads where performance is prioritized.

5. Hot Key Workload (Zipfian Distribution)

5.1 Zipfian Write

5M operations, 4 threads, Zipfian distribution (hot keys)

Zipfian distribution testing simulates real-world hot key scenarios following the 80/20 rule, where approximately 20% of keys receive 80% of the traffic. With 5 million operations generating roughly 661K unique keys, TidesDB achieves 877K operations per second compared to RocksDB’s 492K ops/sec, maintaining a 78% throughput advantage (1.78x faster) even with concentrated access patterns. Average latency measures 4.07μs with P50 of 4μs, P95 of 7μs, P99 of 12μs, and maximum of 5384μs.
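
For context, a Zipfian key picker of the kind such benchmarks use can be sketched as follows; this simple version precomputes the CDF of P(k) ∝ 1/k^s and binary-searches a uniform draw (benchtool's actual generator may differ):

```c
#include <math.h>
#include <stdlib.h>

/* Sketch of a Zipfian key picker: precompute the CDF of P(k) ~ 1/k^s
   for k = 1..n, then binary-search a uniform draw. Low-numbered "hot"
   keys receive the bulk of the traffic. */
static double *cdf;
static int nkeys;

void zipf_init(int n, double s) {
    nkeys = n;
    cdf = malloc((size_t)n * sizeof *cdf);
    double sum = 0.0;
    for (int k = 1; k <= n; k++) {
        sum += 1.0 / pow((double)k, s);
        cdf[k - 1] = sum;
    }
    for (int k = 0; k < n; k++)
        cdf[k] /= sum; /* normalize so cdf[n-1] == 1.0 */
}

int zipf_next(void) {
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* u in (0,1) */
    int lo = 0, hi = nkeys - 1;
    while (lo < hi) { /* first index with cdf[index] >= u */
        int mid = lo + (hi - lo) / 2;
        if (cdf[mid] < u) lo = mid + 1;
        else hi = mid;
    }
    return lo; /* key id in [0, nkeys) */
}
```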

Interestingly, iteration performance shows RocksDB with an advantage at 1.99M ops/sec versus TidesDB’s 928K ops/sec, a 2.14x improvement for RocksDB. This demonstrates the effectiveness of RocksDB’s multi-level architecture and sophisticated caching strategies when dealing with hot keys, where frequently accessed data benefits from being cached at multiple levels. Write amplification is 2.57x for TidesDB versus 1.32x for RocksDB, while space amplification shows a dramatic difference at 1.87x versus 0.13x, with RocksDB achieving 14.4x better space efficiency due to its aggressive compaction of duplicate keys.

5.2 Zipfian Mixed

5M operations, 4 threads, Zipfian distribution, 50/50 read/write

Hot key mixed workload testing combines the Zipfian distribution with simultaneous reads and writes, creating a realistic scenario where popular keys receive concentrated traffic. TidesDB achieves 923K PUT operations per second compared to RocksDB’s 476K ops/sec, representing a 94% throughput advantage (1.94x faster). Read performance shows 1.26 million GET operations per second versus RocksDB’s 970K ops/sec, a 30% improvement (1.30x faster).

Latency characteristics remain strong under this challenging workload. PUT operations average 3.85μs with P50 of 4μs, P95 of 6μs, P99 of 11μs, and maximum of 3783μs. GET operations average 2.89μs with P50 of 2μs, P95 of 8μs, P99 of 14μs, and maximum of 226μs. These metrics demonstrate TidesDB’s ability to maintain consistent performance even when dealing with skewed access patterns where a small subset of keys receives the majority of traffic.

Write amplification measures 2.75x for TidesDB versus 1.32x for RocksDB, while space amplification shows a dramatic difference at 1.94x versus 0.10x, with RocksDB achieving 19.4x better space efficiency. This extreme space efficiency for RocksDB in hot key scenarios results from its aggressive compaction of duplicate keys, where the same keys are repeatedly updated and older versions are quickly discarded.

6. Delete Performance

5M operations, 4 threads, random keys (pre-populated database)

Deletion performance testing with 5 million operations on a pre-populated database shows TidesDB achieving 972K operations per second compared to RocksDB’s 695K ops/sec, representing a 40% throughput advantage (1.40x faster). Average latency is 3.99μs versus RocksDB’s 5.65μs, roughly 29% lower (a 1.42x difference). The latency distribution shows TidesDB with P50 of 4μs, P95 of 5μs, and P99 of 7μs, while RocksDB shows P50 of 5μs, P95 of 7μs, and P99 of 10μs.

An interesting characteristic appears in the maximum latency measurements. TidesDB shows a maximum of 26.1ms compared to RocksDB’s 961μs, a significantly higher tail latency. This is likely attributable to background compaction operations that periodically run to merge SSTables and remove tombstones. While these occasional spikes are present, the consistently lower average and P99 latencies demonstrate that TidesDB maintains superior performance for the vast majority of operations.

Resource consumption shows TidesDB using 1304 MB RSS with 946 MB disk writes and 770.6% CPU utilization, while RocksDB uses significantly less at 169 MB RSS with 301 MB disk writes and 392.3% CPU utilization. TidesDB’s higher CPU utilization again demonstrates its effective use of multi-core resources, though RocksDB’s lower memory footprint and disk writes reflect its more conservative resource usage during deletion operations.

7. Large Value Performance

1M operations, 4 threads, 256B keys, 4KB values

Large value testing with 1 million operations using 256-byte keys and 4KB values reveals TidesDB’s most dramatic performance advantage. Throughput reaches 183K operations per second compared to RocksDB’s 34K ops/sec, an extraordinary 5.38x improvement. This represents TidesDB’s largest performance advantage across all tested workloads. Average latency measures 17.08μs with P50 of 9μs, P95 of 21μs, P99 of 31μs, and maximum of 23.8ms. Iteration speed shows 1.25 million ops/sec versus RocksDB’s 398K ops/sec, a 3.15x advantage.

Interestingly, write amplification characteristics reverse with large values. TidesDB achieves 1.09x write amplification compared to RocksDB’s 1.35x, making TidesDB more efficient in terms of write overhead when handling larger data blocks. This suggests TidesDB’s architecture is particularly well-suited for applications storing larger objects, documents, or serialized data structures. Space amplification measures 0.85x for TidesDB versus 0.10x for RocksDB, with the database sizes being 3519 MB versus 436 MB respectively (8.1x smaller for RocksDB).
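
The 1.09x figure can be sanity-checked against the reported disk writes:

  logical data ≈ 1M ops × (256 B key + 4096 B value) ≈ 4150 MB
  write amplification ≈ 4530 MB written ÷ 4150 MB ≈ 1.09x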

Resource consumption shows TidesDB using 3773 MB RSS with 4530 MB disk writes and 638.3% CPU utilization, while RocksDB uses 2790 MB RSS with 5600 MB disk writes but only 110.1% CPU utilization. The dramatically lower CPU utilization for RocksDB (110% vs 638%) suggests it may be I/O bound with large values, while TidesDB’s higher CPU usage indicates it’s effectively parallelizing the workload across multiple cores even with larger data blocks.

8. Small Value Performance

50M operations, 4 threads, 16B keys, 64B values

Small value testing at massive scale with 50 million operations using 16-byte keys and 64-byte values demonstrates TidesDB’s sustained performance characteristics. Throughput reaches 923K operations per second compared to RocksDB’s 512K ops/sec, maintaining an 80% advantage (1.80x faster) even at this extreme scale. Average latency measures 4.10μs with P50 of 3μs, P95 of 6μs, P99 of 8μs, and maximum of 70.2ms. The higher maximum latency likely reflects occasional background compaction operations across the large dataset.

Iteration performance shows an interesting reversal, with RocksDB achieving 5.46 million ops/sec versus TidesDB’s 4.14 million ops/sec, a 1.32x advantage for RocksDB. This suggests that at massive scale with small, cache-friendly values, RocksDB’s multi-level architecture and block-based storage may provide better sequential scan characteristics. Write amplification favors TidesDB at 2.40x versus RocksDB’s 2.56x, indicating TidesDB writes slightly less data to disk relative to the logical data size. Space amplification measures 1.70x for TidesDB versus 0.34x for RocksDB, with database sizes of 6502 MB versus 1310 MB respectively (5.0x smaller for RocksDB).

Resource consumption at this scale shows both engines using substantial memory. TidesDB utilizes 11806 MB RSS with 9149 MB disk writes and 660.4% CPU utilization, while RocksDB uses 11388 MB RSS with 9749 MB disk writes and 337.5% CPU utilization. The similar memory footprints suggest both engines are effectively caching data at this scale, though TidesDB continues to demonstrate nearly 2x higher CPU utilization, indicating more aggressive parallelization of the workload across available cores.

Key Findings

TidesDB Strengths

TidesDB demonstrates exceptional write performance across all workloads, ranging from 1.40x to 5.38x faster than RocksDB. Sequential writes show a 1.53x advantage, random writes achieve 1.80x faster throughput, and the most dramatic improvement appears with large 4KB values at 5.38x faster. Even with small 64-byte values at massive scale (50 million operations), TidesDB maintains a 1.80x throughput advantage.

Read performance reveals even more impressive advantages, spanning 1.30x to 5.24x faster than RocksDB. Random reads achieve a remarkable 5.24x improvement, reaching 9.51 million operations per second compared to RocksDB’s 1.81 million ops/sec. The sub-microsecond average latency of 0.28μs is exceptional, with a P50 latency of 0μs indicating that most reads complete in under 1 microsecond. This demonstrates the effectiveness of TidesDB’s lock-free read architecture where readers never acquire locks, never block, and scale linearly with CPU cores.

Iteration speed for full database scans shows consistent advantages of 2.16x to 2.67x faster than RocksDB. Sequential iteration reaches 11.11 million ops/sec versus RocksDB’s 5.14 million ops/sec, while mixed workload iteration achieves 13.57 million ops/sec compared to 5.07 million ops/sec. This superior scan performance makes TidesDB particularly well-suited for analytics and batch processing workloads.

CPU utilization metrics reveal TidesDB’s superior multi-core scaling, achieving 660-770% utilization compared to RocksDB’s 335-417%. This indicates TidesDB more effectively leverages available CPU resources across multiple cores. Write amplification shows interesting characteristics, with TidesDB achieving better amplification (1.09x vs 1.35x) specifically with large values, though generally showing higher amplification due to temporary index file creation during SSTable building.

RocksDB Strengths

RocksDB demonstrates superior space efficiency with database sizes ranging from 3.4x to 19.4x smaller than TidesDB. Sequential write tests show RocksDB using 343 MB versus TidesDB’s 1866 MB (5.4x smaller), Zipfian distribution results in 69 MB versus 1034 MB (15x smaller), and small value tests produce 1310 MB versus 6502 MB (5.0x smaller). This difference stems from RocksDB’s multi-level LSM architecture (L0→L1→…→L6) with level-based compaction, more efficient compression, and smaller index structures compared to TidesDB’s embedded succinct trie indexes. These measurements exclude temporary index construction files (trie_*, *.tmp) but include WAL files and SSTables present at measurement time. The comparison is fair as both engines are measured identically, though the absolute sizes reflect the database state immediately after benchmark completion rather than after full cleanup.
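
As a sanity check, these sizes are consistent with the amplification factors in section 1: the sequential-write workload produces about 10M × (16 B + 100 B) ≈ 1106 MB of logical data, so TidesDB’s 1866 MB database works out to 1866 ÷ 1106 ≈ 1.69x space amplification, and RocksDB’s 343 MB to 343 ÷ 1106 ≈ 0.31x, matching the reported figures.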

Write amplification is generally lower for RocksDB, ranging from 1.32x to 1.85x compared to TidesDB’s 2.09x to 3.14x. RocksDB’s multi-level compaction strategy spreads writes across multiple levels, while TidesDB’s architecture (memtables → SSTables) with count-based compaction triggers (default: 32 SSTables) results in different write patterns. TidesDB’s write amplification includes all disk I/O during the benchmark, including writes to temporary index files. Space amplification measures the database size immediately after benchmark completion (excluding temporary index files but including WAL files and SSTables); as with the size measurements above, both engines are measured identically. Space amplification shows RocksDB with significantly better efficiency at 0.10x to 0.49x versus TidesDB’s 1.60x to 1.94x, reflecting the fundamental architectural trade-offs between the two engines.

Hot key iteration performance favors RocksDB with a 2.14x advantage in Zipfian distribution tests, demonstrating the effectiveness of RocksDB’s multi-level architecture and sophisticated caching strategies when dealing with concentrated access patterns. Memory efficiency also tends to favor RocksDB with lower RSS usage in most tests, though TidesDB’s higher memory consumption often correlates with its superior throughput and CPU utilization.

Conclusion

TidesDB v4.0.0 consistently outperforms RocksDB in throughput and latency across all workloads tested. Write performance ranges from 1.40x to 5.38x faster with an average improvement of approximately 1.8x, while read performance shows even more dramatic advantages spanning 1.30x to 5.24x faster with an average of around 2.5x. Iteration speed for full database scans demonstrates consistent superiority at 2.16x to 2.67x faster than RocksDB. Latency characteristics are exceptional, with sub-microsecond read performance averaging 0.28μs and write latency consistently in the 3-4μs range.

RocksDB maintains significant advantages in space efficiency, with database sizes ranging from 3x to 19x smaller than TidesDB. This reflects fundamental architectural differences: RocksDB’s multi-level LSM tree (L0→L1→…→L6) with level-based compaction versus TidesDB’s simpler structure (memtables → SSTables) with embedded succinct trie indexes and count-based compaction triggers. Write amplification is generally lower for RocksDB at 1.3-1.9x compared to TidesDB’s 2.1-3.1x, as RocksDB’s level-based strategy spreads writes across multiple levels while TidesDB consolidates data at the SSTable tier. Memory footprint also tends to favor RocksDB with lower RSS usage in most scenarios, though TidesDB’s higher memory usage correlates with its superior performance characteristics.

Choosing the Right Storage Engine

The decision between TidesDB and RocksDB ultimately depends on your application’s priorities and constraints. TidesDB is the optimal choice for applications requiring maximum throughput and low latency, particularly those with read-heavy or mixed workloads where performance is the primary concern. Its simpler codebase of approximately 27,000 lines compared to RocksDB’s 300,000 lines makes it easier to understand, debug, and maintain. Applications prioritizing raw performance over disk space will benefit significantly from TidesDB’s consistent advantages across all operation types.

RocksDB remains the better choice for disk space-constrained environments where storage efficiency is paramount. Applications requiring minimal write amplification to extend SSD lifespan will appreciate RocksDB’s lower overhead. The mature ecosystem with extensive tooling, monitoring capabilities, and community knowledge makes RocksDB easier to deploy and operate in production environments. Hot key workloads with skewed access patterns may also benefit from RocksDB’s sophisticated multi-level caching strategies, as demonstrated by its 2.14x iteration advantage in Zipfian distribution tests.

The fundamental trade-off is clear: TidesDB prioritizes raw performance with exceptional throughput and sub-microsecond latency, while RocksDB prioritizes space efficiency and lower write amplification. For most modern applications where disk space is relatively inexpensive and performance directly impacts user experience, TidesDB’s advantages in throughput and latency make it a compelling choice. However, for applications operating under strict storage constraints or requiring minimal write amplification for SSD longevity, RocksDB’s space efficiency and lower amplification factors remain valuable.


Thanks for reading!