
System Design

Distributed Cache

Design a distributed caching system that stores key-value pairs across multiple nodes for fast access. The system uses consistent hashing to determine which node stores each key, minimizing redistribution when nodes are added or removed. Each node implements an LRU (Least Recently Used) eviction policy to manage memory, and entries support TTL (Time-To-Live) expiration for automatic cleanup. Key trade-offs include cache coherence across nodes, hot-key distribution causing uneven load, and the balance between cache hit ratio and memory consumption.
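The routing step described above can be sketched with a minimal consistent-hash ring. This is an illustrative sketch, not the system's actual implementation: the class name, the use of MD5, and the virtual-node count are all assumptions made for the example. Virtual nodes smooth out the key distribution, and removing a node only remaps the keys that lived on it.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Illustrative consistent-hash ring: maps keys to nodes so that
    only the departed node's keys move when membership changes."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes          # virtual nodes per physical node
        self._ring = []               # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Place vnodes replicas of the node around the ring.
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get_node(self, key):
        if not self._ring:
            raise KeyError("ring is empty")
        h = self._hash(key)
        # First ring entry clockwise from the key's hash (wraps around).
        idx = bisect.bisect(self._ring, (h,)) % len(self._ring)
        return self._ring[idx][1]
```

When a node is removed, keys that hashed to other nodes keep their placement, which is the property that makes scaling the cache tier cheap.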

Use Cases

  • Database query result caching to reduce read latency
  • Session storage for horizontally scaled web applications
  • Content delivery network edge caching for static assets
  • Rate limiting counters distributed across API gateway nodes
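The first use case, database query result caching, is usually implemented with the cache-aside pattern: check the cache, fall back to the database on a miss, then repopulate the cache with a TTL to bound staleness. A minimal sketch; the `cache` and `db` objects and the `get_user` helper are hypothetical stand-ins, not part of the system described above.

```python
def get_user(user_id, cache, db, ttl=300):
    """Cache-aside read: try the cache, fall back to the source of
    truth on a miss, and repopulate the cache with a TTL."""
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is not None:
        return value                      # cache hit
    value = db.query(user_id)             # cache miss: read the database
    if value is not None:
        cache.put(key, value, ttl=ttl)    # populate; TTL bounds staleness
    return value
```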

Visualization

Overview diagram: a client application routes requests across three cache nodes (Node 1 covers hash range 0-120, Node 2 covers 121-240, Node 3 covers 241-360), each applying LRU eviction, with a database behind them as the source of truth.

Distributed cache: data is partitioned across multiple cache nodes using consistent hashing, backed by a database as the source of truth.

Implementation
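A single cache node combines the two per-node policies from the problem statement: LRU eviction bounded by a maximum entry count, and TTL expiry. This is a sketch under assumptions (names like `CacheNode`, lazy expiry on read, and `OrderedDict`-based LRU are choices made for the example, not the system's prescribed implementation):

```python
import time
from collections import OrderedDict

class CacheNode:
    """Illustrative single cache node: LRU eviction bounded by
    max_entries, plus lazy TTL expiry checked on read."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._store = OrderedDict()       # key -> (value, expires_at)

    def put(self, key, value, ttl=None):
        expires = time.monotonic() + ttl if ttl is not None else None
        if key in self._store:
            self._store.pop(key)          # re-insert to refresh recency
        elif len(self._store) >= self.max_entries:
            self._store.popitem(last=False)   # evict least recently used
        self._store[key] = (value, expires)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires = item
        if expires is not None and time.monotonic() >= expires:
            del self._store[key]          # lazy expiry on read
            return default
        self._store.move_to_end(key)      # mark as most recently used
        return value
```

Lazy expiry keeps reads cheap but leaves dead entries occupying memory until they are read or evicted; production caches typically pair it with a background sweep.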
