Insights on Database Caching Approaches

Every dynamic application relies on data stored in a database. When the server receives a client request, it processes the request and fetches the required data from the database, which is then sent back to the client. An application built on a disk-based database can run into performance problems in spite of query optimization techniques and schema design enhancements.

To keep the application scalable and its latency low, an in-memory cache is used. The main purpose of a cache is to reduce the time needed to access data: storing frequently accessed data in the cache reduces the number of database operations.

Types of Database Caches

Integrated Caches:

An integrated cache is a built-in capability of the database and is managed within it (for example, Amazon Aurora). When the data changes, the database automatically updates the cache. The downsides of this type are that the cache is limited to the memory the database allocates to it, and that the cached data cannot be shared with other instances.

Local Caches:

A local cache stores frequently used data within the application itself. This makes data retrieval faster than other caching architectures, since no network hop is needed to reach the data.

The major challenge with a local cache is data sharing in distributed environments. For example, if the application runs on multiple servers and each server owns its own cache, keeping the cached values consistent across those servers becomes a major problem. Also, when an outage occurs, the data in the local cache is lost.
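As a concrete illustration, here is a minimal in-process cache in Python using functools.lru_cache; the FAKE_DB dictionary is a stand-in for a real database, not something from the article.

    from functools import lru_cache

    FAKE_DB = {1: "alice", 2: "bob"}  # stand-in for the real database

    @lru_cache(maxsize=1024)  # the cache lives only in this process's memory
    def get_user(user_id):
        # Executed only on a cache miss; hits never reach the "database".
        return FAKE_DB[user_id]

    get_user(1)  # miss: reads FAKE_DB
    get_user(1)  # hit: served from the local cache

Because the cache is per-process, a second server running the same code would hold its own, possibly inconsistent, copy of the data.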

Remote Caches:

A remote cache stores the cached data in memory on separate, dedicated servers and is typically built on a key/value store such as Redis or Memcached. This is ideal for distributed environments, since the cache nodes work as a connected cluster that all application servers can share.
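For example, here is a minimal sketch of reading and writing a remote cache with the redis-py client; the hostname, port, and key are placeholder values, not ones from the article.

    import redis

    # Connect to a dedicated cache node; host and port are placeholders.
    r = redis.Redis(host="cache.example.internal", port=6379, decode_responses=True)

    r.set("user:1:name", "alice")  # any application server can write this key...
    print(r.get("user:1:name"))    # ...and any other server can read it back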

Caching Patterns

There are two common approaches to populating a database cache: cache-aside or lazy loading (a reactive approach) and write-through (a proactive approach). The approach should be selected based on the application's objectives.

Cache-Aside (Lazy Loading) Approach

The cache-aside approach updates the cache only after the data has been requested from the database. Data retrieval follows the process below (a code sketch follows the list).

  • When the application needs to fetch data, it first checks for the data in the cache.
  • If the cache has the data, the cached copy is returned to the application. This avoids an unnecessary request to the database.
  • If the cache misses, the database is queried, the cache is populated with the retrieved data, and the data is returned to the caller.
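To make this concrete, here is a minimal cache-aside sketch in Python using the redis-py client. It assumes a Redis server on the default local port; query_database and the customer:<id> key format are hypothetical stand-ins for the real data layer.

    import json
    import redis

    r = redis.Redis(decode_responses=True)  # assumes Redis on localhost:6379

    def query_database(customer_id):
        # Stand-in for a real database query.
        return {"id": customer_id, "name": "alice"}

    def get_customer(customer_id):
        """Cache-aside: check the cache first, query the database only on a miss."""
        key = f"customer:{customer_id}"       # key format is an assumption
        cached = r.get(key)
        if cached is not None:                # cache hit: no database round trip
            return json.loads(cached)
        record = query_database(customer_id)  # cache miss: go to the database
        r.set(key, json.dumps(record))        # populate the cache for next time
        return record

Note that the first call for a given id pays the full database cost; only subsequent calls are served from the cache.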

Advantages:

  • The cache contains only the data the application actually requests, which keeps the cache size cost-effective.
  • Performance gains appear immediately, as soon as a key is requested a second time.

Disadvantages:

  • Since data is loaded into the cache only on a miss, the first request for each key carries the overhead of a database round trip plus a cache write.
  • Stale data can occur. Because the cache is updated only on a cache miss, a cached value can drift out of date when the underlying database changes. This can be mitigated with the write-through or Time to Live (TTL) strategies described below.

Write-Through Approach

This is a proactive approach in which the cache is updated whenever the database is updated. The write path works as follows (a code sketch follows the list).

  • The backend process or the application updates the database.
  • The cache is updated immediately with the latest data.
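A minimal write-through sketch, under the same assumptions as above (redis-py, a local Redis, and a hypothetical write_database helper):

    import json
    import redis

    r = redis.Redis(decode_responses=True)  # assumes Redis on localhost:6379

    def write_database(record):
        # Stand-in for a real INSERT/UPDATE statement.
        pass

    def save_customer(record):
        """Write-through: persist to the database, then refresh the cache."""
        write_database(record)
        key = f"customer:{record['id']}"  # same hypothetical key format as above
        r.set(key, json.dumps(record))    # readers never see a stale value for this key

    save_customer({"id": 1, "name": "alice"})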

Advantages:

  • The data in the cache will never be stale, as the cache is updated every time the data is updated.
  • Database read operations are minimized, which improves performance.

Disadvantages:

  • Because every write goes through the cache, infrequently requested data is also written into it, resulting in cache churn and a larger, more expensive cache.

Time to Live (TTL)

To avoid stale data and keep the cache size optimal, a cache expiration, or Time to Live, can be set. TTL specifies the number of seconds (or milliseconds) after which a particular key/value in the cache expires. When the application tries to access an expired value, the database is queried for the key and the cache is updated. This keeps the cache periodically refreshed.

Adding some random jitter to the expiration time is important: it reduces the chance of a database load spike when many keys expire at once. This lowers pressure on the database as well as CPU usage on the cache engine.
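As an illustration, here is a sketch of setting a TTL with jitter via redis-py; the base TTL and jitter window are arbitrary example values.

    import json
    import random
    import redis

    r = redis.Redis(decode_responses=True)  # assumes Redis on localhost:6379

    BASE_TTL = 300  # seconds; example value, tune to how often the data changes

    def cache_with_jitter(key, value):
        # Expire the key after BASE_TTL plus up to 60 seconds of random jitter,
        # so that keys written together do not all expire together.
        ttl = BASE_TTL + random.randint(0, 60)  # jitter window is an example value
        r.setex(key, ttl, json.dumps(value))    # SETEX sets value and expiry atomically

    cache_with_jitter("customer:1", {"id": 1, "name": "alice"})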

Conclusion

Before deciding on a caching pattern, it is important to analyze how frequently the underlying data is updated and to evaluate the risk of outdated data being returned to the application. For example, the TTL can be set higher for static data, as there is less risk of it going stale, while dynamic data should be cached with a lower TTL to reduce the risk of serving outdated values. Applying an appropriate TTL to cached keys can deliver a significant performance boost.

Author: Nandhini Meenal BL
