
Redis: Single Thread vs Multithreading

November 3, 2025

When we chase high throughput, our instinct is to pile on more: more servers, more databases, vertical scaling, more threads. The conventional wisdom is simple: more workers should equal more performance. So if you were limited to just one worker thread, you'd likely assume you need to go multithreaded to squeeze out maximum performance.

Yet Redis, one of the fastest in-memory databases in the world, executes commands on a single thread. How can a single thread outperform systems running dozens of threads across multiple cores?

The answer lies not in doing more things in parallel, but in removing everything that slows execution down.

The Single-Thread Advantage

Redis's single-threaded design eliminates the three biggest performance killers in multithreaded systems:

1. No Context Switching

Multithreaded system:
Thread A executes → OS interrupt → save state → load Thread B → execute
Cost: ~1-10 microseconds per context switch

Single-threaded Redis:
Continuous execution → no interruptions → no state saving/loading
Cost: 0 microseconds
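The model above can be sketched as a toy command loop in Python (illustrative only; Redis is written in C and built around an event loop, and the `run_single_threaded` helper and its command tuples here are hypothetical). One thread applies every command to an in-memory dict in arrival order, so there is nothing to interrupt and no thread state to save or restore:

```python
def run_single_threaded(commands):
    """Process every command on one thread, in arrival order.

    No other thread ever touches `store`, so there are no context
    switches between workers and no saved/restored thread state.
    """
    store = {}
    results = []
    for op, *args in commands:
        if op == "SET":
            key, value = args
            store[key] = value
            results.append("OK")
        elif op == "GET":
            results.append(store.get(args[0]))
    return results

# Commands execute back to back, with no interruptions in between.
replies = run_single_threaded([
    ("SET", "user:1", "alice"),
    ("GET", "user:1"),
    ("GET", "missing"),
])
# replies == ["OK", "alice", None]
```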

2. No Locks or Synchronization

Multithreaded system:
acquire_lock() → process() → release_lock()
Cost: Lock contention, memory barriers, cache invalidation

Single-threaded Redis:
Direct memory access → immediate execution
Cost: 0 synchronization overhead
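A quick sketch of the contrast, assuming CPython's `threading` module (the function names are made up for illustration): the multithreaded counter must take a lock on every single update to stay correct, while the single-threaded version touches memory directly with no synchronization at all.

```python
import threading

def locked_increments(n_threads=4, per_thread=25_000):
    """Multithreaded: every update pays acquire_lock()/release_lock()."""
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(per_thread):
            with lock:            # required for correctness under threads
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

def single_thread_increments(total=100_000):
    """Single-threaded: direct memory access, no lock needed."""
    counter = 0
    for _ in range(total):
        counter += 1
    return counter

# Both are correct, but only one paid for 100,000 lock operations.
assert locked_increments() == single_thread_increments() == 100_000
```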

3. No Cache Contention

Multithreaded system:
Thread 1 modifies data → invalidates CPU cache → Thread 2 cache miss
Result: Memory fetch penalty (~100-300 cycles)

Single-threaded Redis:
Single thread → consistent cache state → optimal cache utilization
Result: Data stays hot in CPU cache

So how do the two models compare in practice? Rough, order-of-magnitude numbers:

Redis (Single Thread)

Operations per second: ~100,000-500,000
Latency: 0.1-1 milliseconds
CPU utilization: 100% of one core (no waste)
Memory access pattern: Predictable, cache-friendly

Multithreaded Database (Shared Memory)

Operations per second: ~50,000-200,000
Latency: 1-5 milliseconds
CPU utilization: 60-80% across multiple cores (synchronization overhead)
Memory access pattern: Unpredictable, cache-unfriendly

The single thread wins because it eliminates all synchronization costs.

When Multithreading Can Win: Shared-Nothing Architecture

There is one scenario where multithreading can outperform Redis: shared-nothing architecture.

Shared-Nothing Design

Thread 1: Owns keys A-F, shard 1 of data
Thread 2: Owns keys G-M, shard 2 of data
Thread 3: Owns keys N-S, shard 3 of data
Thread 4: Owns keys T-Z, shard 4 of data

No shared memory → No locks → No contention

How It Works

Client request: GET user:john_doe

1. Hash "user:john_doe" → determines shard 2
2. Route request to Thread 2
3. Thread 2 processes entirely in isolation
4. Return response

No synchronization is needed because Thread 2 owns that data exclusively.
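The routing step can be sketched in a few lines of Python. This assumes a simple modulo-over-CRC32 scheme for illustration; Redis Cluster actually maps keys to 16384 hash slots via CRC16, and the `shard_for`/`set_key`/`get_key` helpers here are hypothetical:

```python
import zlib

NUM_SHARDS = 4
# Each dict represents a shard owned exclusively by one thread.
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key):
    """Stable hash → shard index: every key has exactly one owner."""
    return zlib.crc32(key.encode()) % NUM_SHARDS

def set_key(key, value):
    # Only the owning shard is ever touched, so no lock is needed.
    shards[shard_for(key)][key] = value

def get_key(key):
    return shards[shard_for(key)].get(key)

set_key("user:john_doe", {"name": "John"})
assert get_key("user:john_doe") == {"name": "John"}
```

Because the hash is deterministic, every request for the same key routes to the same shard, and no two shards ever need to coordinate.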

The Trade-offs

Single-Threaded (Redis) Advantages

  • Simplicity: No concurrency bugs, easier to reason about
  • Predictable performance: No lock contention variability
  • Cache efficiency: Data stays hot in CPU cache
  • Low latency: No synchronization delays

Single-Threaded Limitations

  • CPU bound: Can only use one CPU core
  • Blocking operations: Any slow operation blocks everything
  • Scalability ceiling: Limited by single core performance
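The blocking limitation is easy to demonstrate with the toy loop idea from earlier (illustrative Python, not Redis code; the `process` helper and the `"SLOW"` command are made up, with `time.sleep` standing in for something like `KEYS *` over a huge dataset). One slow command delays every command queued behind it, no matter how cheap those commands are:

```python
import time

def process(commands):
    """Run commands sequentially, recording when each one finishes."""
    finished_at = []
    for op in commands:
        if op == "SLOW":
            time.sleep(0.2)   # stand-in for an expensive blocking command
        finished_at.append(time.monotonic())
    return finished_at

start = time.monotonic()
t = process(["SLOW", "GET"])
# The cheap GET could not even start until the slow command completed.
assert t[1] - start >= 0.2
```

This is exactly why Redis documentation warns against commands like `KEYS` in production: on a single thread, one expensive operation stalls every connected client.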

Shared-Nothing Multithreading Advantages

  • Linear scaling: Performance grows with CPU cores
  • Fault isolation: One thread's problems don't affect others
  • Higher throughput: Can process more operations per second

Shared-Nothing Limitations

  • Complexity: Harder to implement and debug
  • Data distribution: Need smart sharding strategies
  • Cross-shard operations: Require coordination and are expensive

Lessons for System Design

The Redis story teaches us several crucial lessons:

    1. Identify Your Bottleneck
    2. Synchronization is Expensive
    3. Architecture Matters More Than Core Count

Conclusion

Redis proves that sometimes less is more. By eliminating the overhead of multithreading (context switching, locks, and cache contention), a single thread can outperform systems with dozens of threads.

The key insights are:

  1. Synchronization overhead often exceeds the benefits of parallelism
  2. Event-driven architecture can handle massive concurrency with one thread
  3. Shared-nothing designs can combine the benefits of both approaches
  4. Cache efficiency matters more than raw CPU count
  5. Simple architectures are often faster and more reliable

When designing high-performance systems, don't automatically reach for more threads. Instead, ask: "What's actually slowing us down?" Often, the answer isn't a lack of parallelism; it's the cost of coordination.

Redis shows us that the fastest way to do something might just be to remove everything that makes it slow.
