

TidesDB v7.4.0 vs RocksDB v10.9.1 Benchmarks

by Alex Gaetano Padula

published on January 25th, 2026

TidesDB v7.4.0 outperforms RocksDB v10.9.1 across nearly all benchmarks:

  • Writes · 1.6-4x faster with 10-27x more stable latencies
  • Reads · Faster iteration (1.42x), seeks (up to 5.19x), range queries (1.15-1.25x), and hot-key GETs (1.72x)
  • Latency · Better p50/p99 on both reads and writes, even when throughput is similar
  • Space · 0.08-0.10x amplification vs 0.13-0.19x

Both engines tested with optimized allocators (TidesDB with mimalloc, RocksDB with jemalloc).

You can download the raw benchtool report #1 here (RocksDB jemalloc & TidesDB mimalloc)

You can download the raw benchtool report #2 here

You can find the benchtool source code here and run your own benchmarks!

Introduction

As usual, this article presents benchmark results comparing TidesDB against RocksDB. The goal is to provide reproducible, honest numbers that help developers make informed decisions about which engine fits their workload.

Test Environment

Component | Specification
CPU | Intel Core i7-11700K @ 3.60GHz (8 cores, 16 threads)
Memory | 46 GB
Kernel | Linux 6.2.0-39-generic
Disk | Western Digital 500GB WD Blue 3D NAND Internal PC SSD (SATA)

Test Configuration

  • Sync mode: Disabled (maximum performance mode)
  • Default batch size: 1000
  • Default threads: 8
  • Key size: 16 bytes (unless noted)
  • Value size: 100 bytes (unless noted)

Sequential Write Performance

Sequential writes are the best-case scenario for LSM-tree engines. Keys arrive in sorted order, minimizing compaction overhead.

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
Throughput | 7,115,164 ops/sec | 1,801,804 ops/sec | 3.95x faster
Duration | 1.405 sec | 5.550 sec | -
Avg Latency | 1,044 μs | 4,439 μs | 4.3x lower
p99 Latency | 1,887 μs | 4,458 μs | 2.4x lower
Max Latency | 3,595 μs | 920,109 μs | 256x lower
Latency CV | 25.36% | 678.52% | 27x more stable
Write Amp | 1.09x | 1.41x | -
Space Amp | 0.10x | 0.19x | -
Peak RSS | 2,479 MB | 2,752 MB | -
DB Size | 111 MB | 208 MB | -

The standout number here is the latency coefficient of variation (CV). TidesDB’s 25% CV indicates predictable latency, while RocksDB’s 678% CV reflects significant variance - likely from background compaction stalls. The 920ms max latency spike in RocksDB is a classic symptom of write stalls during L0->L1 compaction.
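For reference, the coefficient of variation reported in these tables is presumably the standard deviation of the measured latencies divided by their mean, expressed as a percentage; benchtool's exact definition may differ. A minimal sketch of that computation:

```cpp
#include <cmath>
#include <vector>

// Coefficient of variation (CV) = stddev / mean, as a percentage.
// A low CV means latencies cluster tightly around the average; a CV in the
// hundreds of percent means the tail dwarfs the typical operation.
double latency_cv_percent(const std::vector<double>& latencies_us) {
    if (latencies_us.empty()) return 0.0;
    double sum = 0.0;
    for (double v : latencies_us) sum += v;
    const double mean = sum / latencies_us.size();
    double sq = 0.0;
    for (double v : latencies_us) sq += (v - mean) * (v - mean);
    const double stddev = std::sqrt(sq / latencies_us.size());
    return 100.0 * stddev / mean;
}
```

By that definition, a 678% CV means the standard deviation of RocksDB’s write latency is nearly seven times its mean, which is consistent with a 920 ms spike sitting next to a 4.4 ms average.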

Random Write Performance

Random writes stress the LSM-tree more heavily. Keys arrive out of order, creating more overlap between SST files and increasing compaction work.

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
Throughput | 2,522,416 ops/sec | 1,566,226 ops/sec | 1.61x faster
Duration | 3.964 sec | 6.385 sec | -
Avg Latency | 2,985 μs | 5,106 μs | 1.7x lower
p99 Latency | 5,939 μs | 7,595 μs | 1.3x lower
Max Latency | 10,314 μs | 893,415 μs | 87x lower
Latency CV | 34.42% | 521.07% | 15x more stable
Write Amp | 1.11x | 1.32x | -
Space Amp | 0.08x | 0.13x | -
DB Size | 90 MB | 140 MB | -

The throughput advantage narrows from 3.95x to 1.61x under random writes, which is expected. The latency stability story remains consistent - TidesDB avoids the long-tail latency spikes that plague RocksDB under write pressure.
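To make the sequential/random distinction concrete, here is a rough sketch of how a benchmark driver can generate the two key patterns with the 16-byte keys from the test configuration. The zero-padded decimal encoding is an assumption for illustration; benchtool's actual key format may differ.

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <random>
#include <string>

// 16-byte, zero-padded decimal key. Sequential workloads pass i = 0, 1, 2, ...
// so keys arrive already sorted and flushed SSTs barely overlap; random
// workloads draw i uniformly, so SSTs overlap and compaction must merge them.
std::string make_key(uint64_t i) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%016" PRIu64, i % 10000000000000000ULL);
    return std::string(buf, 16);
}

int main() {
    std::mt19937_64 rng(42);
    const uint64_t keyspace = 10'000'000;
    std::string seq = make_key(12345);            // "0000000000012345"
    std::string rnd = make_key(rng() % keyspace); // arbitrary key in [0, 10M)
    std::printf("%s %s\n", seq.c_str(), rnd.c_str());
}
```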

Read Performance

Read performance varies significantly by access pattern. TidesDB dominates on iteration, seeks, and hot-key workloads, while showing competitive performance on uniform random point reads.

Random Point Reads (10M ops)

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Winner
GET Throughput | 1,005,624 ops/sec | 1,600,183 ops/sec | RocksDB (throughput)
ITER Throughput | 8,054,857 ops/sec | 5,663,800 ops/sec | TidesDB 1.42x
GET p50 Latency | 3.00 μs | 4.00 μs | TidesDB 1.33x lower
GET p99 Latency | 7.00 μs | 12.00 μs | TidesDB 1.71x lower

Key insight

While RocksDB achieves higher GET throughput, TidesDB delivers better latency at every percentile. For latency-sensitive applications, TidesDB’s lower p50 (3μs vs 4μs) and p99 (7μs vs 12μs) matter more than raw throughput. TidesDB’s iteration is also 1.42x faster.

Seek Performance

Seek operations position an iterator at a specific key—critical for range queries and prefix scans.

Pattern | TidesDB ops/sec | RocksDB ops/sec | Ratio
Random | 1,288,318 | 890,820 | 1.45x faster
Sequential | 3,926,977 | 1,867,375 | 2.10x faster
Zipfian (hot keys) | 3,336,501 | 643,107 | 5.19x faster

TidesDB’s seek performance is dramatically better, especially for Zipfian patterns where hot keys benefit from caching.
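For readers unfamiliar with the operation being measured, this is roughly what a single seek looks like against RocksDB’s iterator API. Only the RocksDB side is shown; TidesDB exposes its own cursor interface, which benchtool drives equivalently.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Position an iterator at the first key >= target and read its value.
// Seek cost is dominated by locating the right memtable entries and SST blocks.
std::string seek_value(rocksdb::DB* db, const std::string& target) {
    std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(rocksdb::ReadOptions()));
    it->Seek(target);
    assert(it->status().ok());
    return it->Valid() ? it->value().ToString() : std::string();
}
```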

Range Query Performance

Range queries scan multiple consecutive keys.

Range Size | TidesDB ops/sec | RocksDB ops/sec | Ratio
100 keys (random) | 345,330 | 294,095 | 1.17x faster
1000 keys (random) | 51,012 | 44,460 | 1.15x faster
100 keys (sequential) | 512,370 | 408,864 | 1.25x faster

TidesDB maintains a consistent advantage on range queries across different sizes and patterns.
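A range query in these runs is essentially a seek followed by a bounded scan. A minimal sketch using RocksDB’s iterator, with the loop structure an assumption about how benchtool walks the range:

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Scan up to `limit` consecutive keys starting at `start`.
// Throughput here is governed by iterator Next() cost, not point-lookup cost.
std::vector<std::pair<std::string, std::string>>
range_scan(rocksdb::DB* db, const std::string& start, size_t limit) {
    std::vector<std::pair<std::string, std::string>> out;
    std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(rocksdb::ReadOptions()));
    for (it->Seek(start); it->Valid() && out.size() < limit; it->Next()) {
        out.emplace_back(it->key().ToString(), it->value().ToString());
    }
    return out;
}
```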

Mixed Workload (50/50 Read/Write)

Real workloads rarely do pure reads or pure writes. This test interleaves both operations.

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
PUT Throughput | 2,833,870 ops/sec | 2,077,171 ops/sec | 1.36x faster
GET Throughput | 1,603,626 ops/sec | 1,570,407 ops/sec | 1.02x faster
PUT Avg Latency | 2,551 μs | 3,847 μs | 1.5x lower
PUT p99 Latency | 4,827 μs | 5,148 μs | -
PUT Max Latency | 6,334 μs | 62,044 μs | 9.8x lower
PUT CV | 29.79% | 57.23% | -
Write Amp | 1.09x | 1.25x | -
Space Amp | 0.08x | 0.14x | -
DB Size | 44 MB | 79 MB | -

Under mixed load, TidesDB maintains its write advantage while matching RocksDB on reads. The max latency difference (6ms vs 62ms) matters for applications with SLA requirements.

Zipfian (Hot Key) Workload

Zipfian distribution simulates real-world access patterns where some keys are accessed far more frequently than others.
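For readers who want to reproduce the skew, a simple Zipfian index sampler looks like the sketch below. The exponent and the exact generator benchtool uses are assumptions, but any Zipf-like sampler produces the same shape: a few hot keys and a long cold tail.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

// Zipfian sampler over n keys with skew exponent s (s ~= 0.99 is typical in
// YCSB-style benchmarks). Precomputes the CDF once; sampling is a binary search.
class ZipfianGenerator {
public:
    ZipfianGenerator(std::size_t n, double s, std::uint64_t seed)
        : cdf_(n), rng_(seed), uniform_(0.0, 1.0) {
        double norm = 0.0;
        for (std::size_t i = 1; i <= n; ++i) norm += 1.0 / std::pow(double(i), s);
        double cum = 0.0;
        for (std::size_t i = 1; i <= n; ++i) {
            cum += (1.0 / std::pow(double(i), s)) / norm;
            cdf_[i - 1] = cum;
        }
        cdf_.back() = 1.0;  // guard against floating-point drift
    }
    // Returns an index in [0, n); low indices are the hot keys.
    std::size_t next() {
        double u = uniform_(rng_);
        return std::lower_bound(cdf_.begin(), cdf_.end(), u) - cdf_.begin();
    }
private:
    std::vector<double> cdf_;
    std::mt19937_64 rng_;
    std::uniform_real_distribution<double> uniform_;
};
```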

Zipfian Writes

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
Throughput | 3,142,460 ops/sec | 1,551,264 ops/sec | 2.03x faster
Avg Latency | 2,326 μs | 5,152 μs | 2.2x lower
p99 Latency | 4,197 μs | 8,028 μs | 1.9x lower
Write Amp | 1.04x | 1.24x | -
Space Amp | 0.02x | 0.11x | -
DB Size | 10 MB | 62 MB | 6x smaller

The space amplification difference is dramatic here. With hot keys, TidesDB’s compaction strategy results in a 10 MB database vs RocksDB’s 62 MB - a 6x difference.

Zipfian Mixed

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
PUT Throughput | 2,995,513 ops/sec | 1,632,148 ops/sec | 1.84x faster
GET Throughput | 3,161,078 ops/sec | 1,832,908 ops/sec | 1.72x faster
ITER Throughput | 3,950,385 ops/sec | 2,107,646 ops/sec | 1.87x faster
GET Avg Latency | 1.84 μs | 3.75 μs | 2x lower
GET p99 Latency | 4.00 μs | 10.00 μs | 2.5x lower

TidesDB excels on hot-key workloads across all operations. The read performance advantage here (versus the throughput disadvantage on uniform random reads) suggests that TidesDB’s caching is effective for skewed access patterns.

Large Value Performance (4KB values)

Larger values stress different parts of the system—more I/O bandwidth, different compression ratios, and different memory pressure.

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
Throughput | 368,453 ops/sec | 122,519 ops/sec | 3.01x faster
Avg Latency | 21,360 μs | 65,208 μs | 3.1x lower
p99 Latency | 39,027 μs | 1,072,529 μs | 27x lower
Max Latency | 53,906 μs | 1,088,137 μs | 20x lower
Latency CV | 20.05% | 233.19% | 12x more stable
Write Amp | 1.03x | 1.22x | -
DB Size | 302 MB | 347 MB | -

The p99 latency difference is striking - 39ms vs 1,072ms. For large values, RocksDB’s compaction can stall writes for over a second.

Small Value Performance (64B values, 50M ops)

Small values test metadata overhead and per-operation costs.

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
Throughput | 1,995,834 ops/sec | 1,431,936 ops/sec | 1.39x faster
Avg Latency | 3,541 μs | 5,586 μs | 1.6x lower
Max Latency | 110,366 μs | 1,242,438 μs | 11x lower
Latency CV | 77.41% | 603.04% | 7.8x more stable
Write Amp | 1.17x | 1.48x | -
DB Size | 664 MB | 472 MB | -

Interestingly, TidesDB uses more space here (664 MB vs 472 MB) but achieves lower write amplification.

Batch Size Impact

Batch size significantly affects throughput. Here’s how both engines scale:

Batch Size | TidesDB ops/sec | RocksDB ops/sec | TidesDB Advantage
1 | 1,035,154 | 872,584 | 1.19x
10 | 2,850,359 | 1,588,674 | 1.79x
100 | 3,477,309 | 2,277,167 | 1.53x
1,000 | 2,775,285 | 1,722,145 | 1.61x
10,000 | 1,871,186 | 1,199,671 | 1.56x

Both engines peak at batch size 100. TidesDB’s advantage is most pronounced at batch size 10 (1.79x). Very large batches (10,000) hurt both engines due to memory pressure and lock contention.
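For context on what a "batch" means here, writes are grouped and committed together rather than issued one PUT at a time. A minimal sketch against RocksDB’s WriteBatch API, with sync disabled and 100-byte values to match the test configuration; the loop structure is an assumption about how benchtool drives the engine.

```cpp
#include <cstdint>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/write_batch.h"

// Commit `batch_size` PUTs in a single write. Larger batches amortize per-write
// overhead until memory pressure and contention start to dominate.
rocksdb::Status write_one_batch(rocksdb::DB* db, size_t batch_size,
                                uint64_t start_key) {
    rocksdb::WriteBatch batch;
    for (size_t i = 0; i < batch_size; ++i) {
        std::string key = std::to_string(start_key + i);
        batch.Put(key, std::string(100, 'v'));  // 100-byte value, per the config
    }
    rocksdb::WriteOptions wo;
    wo.sync = false;  // "Sync mode: Disabled" from the test configuration
    return db->Write(wo, &batch);
}
```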

Delete Performance

Delete operations in LSM-trees write tombstones, which must later be compacted away.

Metric | TidesDB v7.4.0 | RocksDB v10.9.1 | Ratio
Throughput | 3,023,002 ops/sec | 3,263,712 ops/sec | 0.93x (slightly slower)
Avg Latency | 2,385 μs | 2,449 μs | Similar
Write Amp | 0.18x | 0.28x | -

Delete performance is roughly equivalent, with RocksDB slightly faster on raw throughput but TidesDB showing lower write amplification.

mimalloc vs Regular Allocator

I ran the same benchmarks with TidesDB using mimalloc (report #1, -DTIDESDB_WITH_MIMALLOC=ON) and the standard system allocator (report #2, -DTIDESDB_WITH_MIMALLOC=OFF):

Workload | mimalloc (Report #1) | Regular Allocator (Report #2) | Difference
Sequential Write | 6,365,356 ops/sec | 7,115,164 ops/sec | Regular +11.8%
Random Write | 2,255,283 ops/sec | 2,522,416 ops/sec | Regular +11.8%
Mixed PUT | 2,655,514 ops/sec | 2,833,870 ops/sec | Regular +6.7%
Mixed GET | 1,478,610 ops/sec | 1,603,626 ops/sec | Regular +8.5%
Zipfian PUT | 3,050,739 ops/sec | 3,142,460 ops/sec | Regular +3.0%
Zipfian GET | 3,042,708 ops/sec | 3,161,078 ops/sec | Regular +3.9%
Large Value (4KB) | 323,680 ops/sec | 368,453 ops/sec | Regular +13.8%
Small Value (64B) | 1,827,301 ops/sec | 1,995,834 ops/sec | Regular +9.2%
Delete | 2,871,484 ops/sec | 3,023,002 ops/sec | Regular +5.3%

The regular allocator shows higher numbers in this run. This could be due to system warm-up effects, caching, background processes, or other environmental factors between runs. The difference is likely not significant enough to draw conclusions about allocator performance without more controlled testing on an isolated system.

Key takeaways

  • TidesDB remains stable with both allocators
  • Performance is consistent between runs with minor variations (5-14%)
  • TidesDB shows no stability issues with either allocator (unlike RocksDB, which crashed with jemalloc)

Summary

TidesDB v7.4.0 Advantages

Write Performance

  • Sequential writes · 3.95x faster
  • Random writes · 1.61x faster
  • Large value writes · 3.01x faster
  • Write latency CV · 10-27x more stable
  • Max write latency · 10-256x lower

Read Performance

  • Iteration · 1.42x faster
  • Seek operations · 1.45-5.19x faster
  • Range queries · 1.15-1.25x faster
  • Hot-key GETs · 1.72x faster
  • GET p50/p99 latency · 1.3-1.7x lower (even when throughput is similar)

Efficiency

  • Space amplification · 0.08-0.10x vs 0.13-0.19x
  • Write amplification · Consistently lower

RocksDB v10.9.1 Advantages

  • Uniform random GET throughput · 1.59x higher ops/sec (but with higher latency percentiles)
  • Mature ecosystem · Years of production hardening

Stability Note

During benchmarking, RocksDB experienced crashes when using jemalloc as the allocator. This is not an isolated incident: in previous benchmark runs, RocksDB also crashed with ASAN (AddressSanitizer) enabled and even with the standard system allocator. TidesDB completed all benchmark runs without crashes or stability issues across all allocator configurations. This has been a recurring pattern in my benchmarking experience with RocksDB.

TidesDB v7.4.0 demonstrates strong performance across both write and read workloads compared to RocksDB v10.9.1.

Key findings:

  • Writes · TidesDB is 1.6-4x faster with dramatically more stable latencies
  • Reads · TidesDB wins on iteration (1.42x), seeks (up to 5.19x), range queries (1.15-1.25x), and hot-key GETs (1.72x). Even on uniform random GETs where RocksDB has higher throughput, TidesDB delivers better p50/p99 latencies
  • Hot-key workloads · TidesDB dominates across all operations (1.7-5x faster)
  • Efficiency · Consistently lower space and write amplification

For most workloads - especially those with any write component, scan operations, or skewed access patterns - TidesDB offers advantages in both throughput and latency predictability.

Join the TidesDB Discord for more updates and discussions at https://discord.gg/tWEmjR66cy