system design
Distributed Cache
Design a distributed caching system that stores key-value pairs across multiple nodes for fast access. The system uses consistent hashing to determine which node stores each key, minimizing the number of keys that must be redistributed when nodes are added or removed. Each node implements an LRU (Least Recently Used) eviction policy to bound memory use, and entries support TTL (Time-To-Live) expiration for automatic cleanup. Key trade-offs include maintaining cache coherence across nodes, hot keys causing uneven load, and the balance between cache hit ratio and memory consumption.
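The consistent-hashing step described above can be sketched as follows. This is a minimal illustration, not a production implementation: the class name, the use of MD5 as the ring hash, and the default of 100 virtual nodes per physical node are all assumptions made for the example.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Maps keys to nodes on a hash ring; adding or removing a node
    only remaps the keys that fall in that node's arc of the ring."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes   # virtual nodes per physical node, for smoother balance
        self.ring = []         # sorted list of (hash, node) ring positions
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def remove_node(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def get_node(self, key):
        if not self.ring:
            raise KeyError("ring is empty")
        h = self._hash(key)
        # First ring position clockwise of the key's hash, wrapping around.
        idx = bisect_right(self.ring, (h, chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]
```

Because each physical node owns many small arcs (virtual nodes), removing one node spreads its keys across the survivors instead of dumping them all onto a single neighbor.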
Use Cases
- Database query result caching to reduce read latency
- Session storage for horizontally scaled web applications
- Content delivery network edge caching for static assets
- Rate limiting counters distributed across API gateway nodes
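The per-node behavior behind these use cases, LRU eviction plus TTL expiry, can be sketched as a single class. This is an illustrative sketch under assumptions of the author's design (the `CacheNode` name, lazy expiry on read, and a default capacity are choices made here, not taken from the original):

```python
import time
from collections import OrderedDict

class CacheNode:
    """One cache node: LRU eviction bounded by capacity,
    plus lazy TTL expiry checked on each read."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> (value, expires_at or None)

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.monotonic() >= expires_at:
            del self.store[key]        # expired: evict lazily on access
            return None
        self.store.move_to_end(key)    # mark as most recently used
        return value

    def put(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl is not None else None
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = (value, expires_at)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

Lazy expiry keeps reads cheap but lets dead entries occupy memory until touched; a real system would typically pair it with a periodic background sweep.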