Understanding caching

Introduction to caching and its internals

Intro

In computing, a cache is a high-speed data storage layer that stores a subset of data, typically transient in nature, so that future requests for that data are served faster than is possible by accessing the data’s primary storage location. Caching helps us efficiently reuse previously computed or derived state.

Caching as a concept appears in many different contexts, such as caching application data in RAM, storing frequently requested data in Redis in a server environment, and caching server state on the frontend.

But the primary objective is to increase data retrieval performance by reducing the need to access the underlying slower storage layer.

Caching in different layers

Client-Side

Accelerate retrieval of web content from websites (browser or device). Achieved using HTTP Cache Headers and Browser Caches.

DNS

Domain to IP Resolution. Achieved using DNS Servers.

Web

Accelerate retrieval of web content from web/app servers. Manage Web Sessions (server side). Achieved using HTTP Cache Headers, CDNs, Reverse Proxies, Web Accelerators, and Key/Value Stores.

App

Accelerate application performance and data access. Achieved using Key/Value data stores, Local caches.

Database

Reduce latency associated with database query requests. Achieved using Database buffers, Key/Value data stores.


In a web environment, the caching process generally works something like this (the same idea applies in other contexts as well):

Figure: read-through cache example

After receiving a request, a web server first checks whether the cache has the response available. If it does, it sends the data back to the client. If not, it queries the database, stores the response in the cache, and sends it back to the client.

The caching strategy above is the most common one, called a read-through cache.

Caching strategies

The caching strategy mostly depends on your data access patterns and data type.

For example, you can ask questions like

  • Is the system write-heavy and read less frequently? Logs are a good example of this.

  • Is data written once and read multiple times, like a user profile?

  • Is the returned data always unique? A search query result might be one example.

So it depends 😜...

Let's look at some of the most common strategies.

Cache-Aside

The cache sits on the side and the application directly talks to both the cache and the database. There is no connection between the cache and the primary data store.

Figure: cache-aside strategy

Steps

  1. The application first checks the cache.

  2. If the data is found, we've got a cache hit. The data is returned.

  3. If the data is not found in the cache, we've got a cache miss. The application has to do some extra work: it queries the primary data store, returns the data to the client, and stores it in the cache for subsequent requests.
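
As a rough sketch, cache-aside might look like this in application code, assuming the redis-py client, a Redis instance running locally, and a hypothetical db_get_user function standing in for the primary data store:

```python
import json

import redis

# Assumption: a Redis instance is reachable on localhost:6379.
r = redis.Redis(host="localhost", port=6379)

def db_get_user(user_id):
    # Hypothetical stand-in for a query against the primary data store.
    return {"id": user_id, "name": "Alice"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:              # cache hit: serve from Redis
        return json.loads(cached)
    user = db_get_user(user_id)         # cache miss: query the primary data store
    r.set(key, json.dumps(user))        # populate the cache for subsequent requests
    return user
```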

Pros & Cons

Works best for read-heavy workloads. Memcached and Redis are widely used for this. This approach also makes your system resilient to cache failures. The most common write pattern with this strategy is to write data directly to the database, which can leave the cache inconsistent. To tackle this, we generally use a TTL (time to live); if you can't compromise on data consistency, you either have to invalidate the cache manually or use a more sophisticated write strategy.
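
Continuing the same hypothetical sketch, the two common mitigations look roughly like this: attach a TTL when populating the cache, or invalidate the key explicitly after writing to the database (db_update_user is also a hypothetical stand-in):

```python
CACHE_TTL_SECONDS = 300  # assumption: five minutes of staleness is acceptable

def get_user_with_ttl(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    user = db_get_user(user_id)
    r.set(key, json.dumps(user), ex=CACHE_TTL_SECONDS)  # entry expires automatically
    return user

def update_user(user_id, fields):
    db_update_user(user_id, fields)     # hypothetical direct write to the database
    r.delete(f"user:{user_id}")         # manual invalidation: next read refills the cache
```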

Read-Through Cache

Read-through cache sits in line with the database. When there is a cache miss, it loads missing data from the database, populates the cache, and returns it to the application.

Figure: read-through cache strategy

Both cache-aside and read-through strategies load data lazily, only when it is first read.
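
A minimal sketch of the idea, using a toy in-process cache class: the cache itself is configured with a loader that knows how to reach the database, so the application only ever talks to the cache (all names here are illustrative):

```python
class ReadThroughCache:
    """Toy read-through cache: the cache, not the application, loads missing data."""

    def __init__(self, loader):
        self._loader = loader   # function the cache uses to reach the primary data store
        self._store = {}

    def get(self, key):
        if key not in self._store:                # cache miss
            self._store[key] = self._loader(key)  # the cache itself loads from the database
        return self._store[key]

# The application only ever calls the cache; the loader is a pretend database query.
user_cache = ReadThroughCache(loader=lambda user_id: {"id": user_id, "name": "Alice"})
print(user_cache.get(42))   # miss: loader runs, cache is populated
print(user_cache.get(42))   # hit: served straight from the cache
```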

Pros & Cons

Read-through caches work best for read-heavy workloads. The downside is that the first request always results in a cache miss and incurs the extra penalty of loading the data into the cache. We can deal with this by warming (pre-heating) the cache, that is, running the relevant queries manually ahead of time. Data inconsistency can still occur with this strategy.

Some key differences between read-through and cache-aside are

  1. In cache-aside, the application is responsible for fetching the data from the primary data store and populating the cache. In read-through, the logic is usually supported by the library or the cache provider.

  2. Unlike cache-aside, the data model in a read-through cache cannot differ from that of the database.

Write-Through Cache

As the name suggests, data is first written to the cache and then to the database. The cache sits in line with the database, and writes always go through the cache to the primary data store.

As a result, you get guaranteed data consistency 👌, but the cache can also become a SPOF (single point of failure) if there is only a single cache node 😑

Figure: write-through cache

The application writes the data directly to the cache, and the cache then updates the primary data store. When the write is complete, both the primary data store and the cache are consistent.
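
As a rough sketch, a write-through wrapper forwards every write synchronously to the primary data store before acknowledging it (names are illustrative, and the "database" here is just a print statement):

```python
class WriteThroughCache:
    """Toy write-through cache: every write hits the cache and, synchronously, the database."""

    def __init__(self, db_writer):
        self._db_writer = db_writer   # function that persists to the primary data store
        self._store = {}

    def set(self, key, value):
        self._store[key] = value      # update the cache first
        self._db_writer(key, value)   # then synchronously update the database before returning

    def get(self, key):
        return self._store.get(key)

cache = WriteThroughCache(db_writer=lambda k, v: print(f"DB write: {k}={v}"))
cache.set("user:42", {"name": "Alice"})   # cache and database now agree
```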

Pros & Cons

This strategy certainly introduces extra write latency, because data is first written to the cache and then to the database. But if we pair it with a read-through cache, we get all the benefits of read-through plus the data consistency guarantee, which means no manual cache invalidation.

Write-Back or Write-Behind

In this strategy, the application writes data to the cache, which stores it and sends an acknowledgment back to the application immediately. The cache then writes the data back to the database in batches.

It's very similar to write-through, but with one crucial difference: in write-through, data written to the cache is synchronously updated in the primary data store, while in write-back this process is asynchronous.

From the application point of view, this strategy is faster.

Figure: write-back cache strategy
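
Here is a simplified sketch of that asynchronous flush, buffering writes in memory and persisting them to the database in batches; a real write-back cache would also handle retries, ordering, and crash recovery:

```python
import threading
import time

class WriteBackCache:
    """Toy write-back cache: acknowledge writes immediately, flush to the database in batches."""

    def __init__(self, db_batch_writer, flush_interval=1.0):
        self._db_batch_writer = db_batch_writer
        self._store = {}
        self._dirty = {}              # keys written since the last flush
        self._lock = threading.Lock()
        flusher = threading.Thread(target=self._flush_loop, args=(flush_interval,), daemon=True)
        flusher.start()

    def set(self, key, value):
        with self._lock:
            self._store[key] = value
            self._dirty[key] = value  # acknowledged without touching the database

    def get(self, key):
        with self._lock:
            return self._store.get(key)

    def _flush_loop(self, interval):
        while True:
            time.sleep(interval)
            with self._lock:
                batch, self._dirty = self._dirty, {}
            if batch:
                self._db_batch_writer(batch)   # one batched write instead of many small ones

cache = WriteBackCache(db_batch_writer=lambda batch: print(f"flushing {len(batch)} entries"))
cache.set("user:42", {"name": "Alice"})        # returns immediately; database is updated later
time.sleep(2)                                   # give the background flush a chance to run
```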

Pros & Cons

Write-back caches improve the write performance and are good for write-heavy workloads. When combined with read-through, it works well for mixed workloads, where the most recently updated and accessed data is always available in the cache.

It’s resilient to database failures and can tolerate some database downtime.

So those were some of the caching strategies. There are more, but these are the most commonly used ones.

Considerations and common terminology

  • Expiration policy: It is good practice to implement an expiration policy. Once cached data expires, it is removed from the cache; without an expiration policy, cached data stays in memory permanently. Don't make the expiration time too short, as that forces the system to reload data from the database too frequently, but don't make it too long either, or the data can become stale.

  • Mitigating failures: A single cache node can become a SPOF (single point of failure), depending on the caching strategy you opted for, so multiple cache servers across different data centers are recommended. Another recommended approach is to overprovision the required memory by a certain percentage; this provides a buffer as memory usage increases.

  • Eviction policy: Once the cache is full, any request to add an item might cause existing items to be removed. This is called cache eviction. Least recently used (LRU) is the most popular cache eviction policy; other policies, such as least frequently used (LFU) or first in, first out (FIFO), can be adopted for different use cases.
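
As an illustration of LRU eviction, here is a minimal fixed-capacity cache built on Python's OrderedDict; when the cache is full, the least recently used entry is dropped:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)          # mark as most recently used
        return self._items[key]

    def set(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self._capacity:
            self._items.popitem(last=False)   # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")       # "a" becomes the most recently used entry
cache.set("c", 3)    # capacity exceeded: "b" is evicted
```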

I hope you have enjoyed this post. Until next time ✌️