Caching

Types of caching strategies:

From <https://msdn.microsoft.com/en-us/magazine/hh708748.aspx>

 

Use cases for caching:

  • The content is expensive to produce. Fetching a web page and rendering it can take a few seconds each time.

  • The content can't be precomputed. The number of web pages is practically infinite, and there's no way to predict which ones will be requested (and when).

  • The content changes relatively slowly. Most web pages don't change second by second, so it's okay to serve up an old thumbnail. (The article's author chose five minutes as the threshold for how old was acceptable.)

 

Types of Caching:

Data caching – if the same dataset is needed in multiple views, cache the dataset itself so each view reads it from the cache instead of re-querying the data store.
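A minimal data-caching sketch using .NET's MemoryCache; the "products" key, the loader delegate, and the five-minute lifetime are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class ProductCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static IList<string> GetProducts(Func<IList<string>> loadFromDb)
    {
        // Try the cache first; fall back to the database on a miss.
        var products = Cache.Get("products") as IList<string>;
        if (products == null)
        {
            products = loadFromDb();
            Cache.Set("products", products,
                new CacheItemPolicy
                {
                    // Bound staleness: entry expires five minutes after insertion.
                    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
                });
        }
        return products;
    }
}
```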

 

Output caching – if a view's output is expensive to generate, cache the rendered output itself. Output caching stores the content returned by a controller action. The Location attribute specifies where the content is cached (Server, Client, Downstream, Any, or None), and VaryByParam specifies a request parameter by which the cached content is varied, so each distinct parameter value gets its own cached copy.

https://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/improving-performance-with-output-caching-cs
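A sketch of output caching in ASP.NET MVC along the lines of the linked tutorial; the ProductController, the 60-second duration, and the LoadProduct helper are illustrative:

```csharp
using System.Web.Mvc;
using System.Web.UI;

public class ProductController : Controller
{
    // Cache the rendered output on the server for 60 seconds,
    // keeping a separate cached copy per value of the "id" parameter.
    [OutputCache(Duration = 60,
                 VaryByParam = "id",
                 Location = OutputCacheLocation.Server)]
    public ActionResult Details(int id)
    {
        var model = LoadProduct(id); // stand-in for an expensive lookup
        return View(model);
    }

    private object LoadProduct(int id)
    {
        return new { Id = id }; // placeholder for a slow database call
    }
}
```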

JavaScript tools used for better caching:

MeteorJS, Firebase, and Asana's Luna framework.

Resource Caching

In-Memory Caching

Distributed Cache:

Factors to check for distributed caching:

  1. Temporal locality
  2. Spatial locality
  3. Transactional support
  4. Cache replacement policy (LRU, MRU, LFU) – see the sketch after this list
  5. Primed vs. on-demand population
  6. Shared (in-process or clustered)
  7. Cache size (Pareto principle, i.e. the 80-20 rule)
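A minimal LRU (least-recently-used) replacement sketch: a dictionary for O(1) lookup plus a linked list that keeps entries in recency order, so eviction is also O(1). The class name and API are assumptions for illustration:

```csharp
using System.Collections.Generic;

// Least-recently-used cache: when full, evict the entry accessed longest ago.
public class LruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
        new LinkedList<KeyValuePair<TKey, TValue>>(); // head = most recently used

    public LruCache(int capacity) { _capacity = capacity; }

    public bool TryGet(TKey key, out TValue value)
    {
        if (_map.TryGetValue(key, out var node))
        {
            _order.Remove(node);   // move to head: this key is now
            _order.AddFirst(node); // the most recently used
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    public void Put(TKey key, TValue value)
    {
        if (_map.TryGetValue(key, out var existing))
        {
            _order.Remove(existing);
            _map.Remove(key);
        }
        else if (_map.Count >= _capacity)
        {
            var lru = _order.Last; // tail = least recently used
            _order.RemoveLast();
            _map.Remove(lru.Value.Key);
        }
        var node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
            new KeyValuePair<TKey, TValue>(key, value));
        _order.AddFirst(node);
        _map[key] = node;
    }
}
```

Evicting from the head instead of the tail would turn this into an MRU cache; LFU instead keeps a frequency counter per entry rather than recency order.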

 

How do we update data that is already cached? (Short answer: update the data store, then invalidate the cached item – see the Cache-Aside pattern below.)

What is an ETag in HTTP? An ETag (entity tag) is a response header identifying a specific version of a resource. A client sends it back in an If-None-Match request header; if the resource is unchanged, the server replies 304 Not Modified and the client keeps using its cached copy.
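A hand-rolled sketch of an ETag-based conditional GET in ASP.NET Web API – the kind of plumbing the CachingHandler/CacheCow links below automate. The controller, Report type, and version scheme are illustrative assumptions:

```csharp
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ReportsController : ApiController
{
    public HttpResponseMessage Get(int id)
    {
        var report = LoadReport(id);
        // Derive the ETag from the resource's version (the quotes are required).
        var etag = new EntityTagHeaderValue("\"" + report.Version + "\"");

        // If the client's cached version still matches, skip the body.
        if (Request.Headers.IfNoneMatch.Contains(etag))
        {
            return Request.CreateResponse(HttpStatusCode.NotModified);
        }

        var response = Request.CreateResponse(HttpStatusCode.OK, report);
        response.Headers.ETag = etag;
        return response;
    }

    private Report LoadReport(int id)
    {
        return new Report { Id = id, Version = "v1" }; // placeholder lookup
    }
}

public class Report
{
    public int Id { get; set; }
    public string Version { get; set; }
}
```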

 

Using the CachingHandler in ASP.NET Web API:

http://byterot.blogspot.in/2012/06/aspnet-web-api-caching-handler.html

https://github.com/sajints/CacheCow

Distributed Caching:

https://vimeo.com/album/2683752/video/83758187

Cache-Aside Indirection

 

What is distributed caching?

https://msdn.microsoft.com/en-us/library/dd129907.aspx

 

Redis eviction policies:

The maxmemory-policy configuration value drives eviction: once Redis reaches its configured maxmemory limit, this property determines which keys get evicted (a configuration sketch follows the list).

  1. noeviction – return an error when the memory limit is reached
  2. allkeys-lru – evict least-recently-used keys first, across all keys
  3. volatile-lru – evict least-recently-used keys first, but only among keys that have an expiry set
  4. allkeys-random – evict random keys
  5. volatile-random – evict random keys among those that have an expiry set
  6. volatile-ttl – evict the keys with the shortest remaining TTL first, among those that have an expiry set
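A sketch of applying these settings at runtime with StackExchange.Redis; the localhost endpoint and 100 MB cap are assumptions, and in production the values usually live in redis.conf instead:

```csharp
using StackExchange.Redis;

class ConfigureEviction
{
    static void Main()
    {
        // allowAdmin is required before StackExchange.Redis permits CONFIG commands.
        var mux = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
        var server = mux.GetServer("localhost", 6379);

        // Cap memory at 100 MB and evict least-recently-used keys,
        // considering all keys, once the cap is reached.
        server.ConfigSet("maxmemory", "100mb");
        server.ConfigSet("maxmemory-policy", "allkeys-lru");
    }
}
```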

 

Amazon ElastiCache

ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines:

  • Memcached – a widely adopted memory object caching system. ElastiCache is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
  • Redis – a popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports Master / Slave replication and Multi-AZ which can be used to achieve cross AZ redundancy.

Amazon ElastiCache automatically detects and replaces failed nodes, reducing the overhead associated with self-managed infrastructures and provides a resilient system that mitigates the risk of overloaded databases, which slow website and application load times. Through integration with Amazon CloudWatch, Amazon ElastiCache provides enhanced visibility into key performance metrics associated with your Memcached or Redis nodes.

Using Amazon ElastiCache, you can add an in-memory caching layer to your infrastructure in a matter of minutes by using the AWS Management Console.
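For example, once a Redis cluster is provisioned, the application just points its existing Redis client at the cluster endpoint; the endpoint below is a made-up placeholder:

```csharp
using System;
using StackExchange.Redis;

class ElastiCacheClient
{
    static void Main()
    {
        // Hypothetical endpoint: substitute your cluster's primary endpoint
        // from the AWS Management Console.
        var mux = ConnectionMultiplexer.Connect(
            "my-cluster.abc123.0001.use1.cache.amazonaws.com:6379");
        IDatabase db = mux.GetDatabase();

        db.StringSet("greeting", "hello", expiry: TimeSpan.FromMinutes(5));
        Console.WriteLine(db.StringGet("greeting")); // prints "hello"
    }
}
```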

 

From <https://aws.amazon.com/elasticache/>

 

What is the cache-aside pattern?

 

https://msdn.microsoft.com/en-us/library/dn589799.aspx

Cache-Aside Pattern

Load data on demand into a cache from a data store. This pattern can improve performance and also helps to maintain consistency between data held in the cache and the data in the underlying data store.

Context and Problem

Applications use a cache to optimize repeated access to information held in a data store. However, it is usually impractical to expect that cached data will always be completely consistent with the data in the data store. Applications should implement a strategy that helps to ensure that the data in the cache is up to date as far as possible, but can also detect and handle situations that arise when the data in the cache has become stale.

 

Solution

Many commercial caching systems provide read-through and write-through/write-behind operations. In these systems, an application retrieves data by referencing the cache. If the data is not in the cache, it is transparently retrieved from the data store and added to the cache. Any modifications to data held in the cache are automatically written back to the data store as well.

For caches that do not provide this functionality, it is the responsibility of the applications that use the cache to maintain the data in the cache.

An application can emulate the functionality of read-through caching by implementing the cache-aside strategy. This strategy effectively loads data into the cache on demand. Figure 1 summarizes the steps in this process.

Figure 1 – Using the Cache-Aside pattern to store data in the cache
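A minimal read-path sketch of the pattern, using StackExchange.Redis as the cache; the key, TTL, and loader delegate are illustrative assumptions:

```csharp
using System;
using StackExchange.Redis;

public static class CacheAside
{
    // On a miss, load from the data store and populate the cache on the way out.
    public static string GetOrLoad(IDatabase cache, string key,
                                   Func<string> loadFromStore)
    {
        RedisValue cached = cache.StringGet(key);
        if (cached.HasValue)
        {
            return cached; // cache hit
        }

        string value = loadFromStore();                   // read the data store
        cache.StringSet(key, value,
                        expiry: TimeSpan.FromMinutes(5)); // bound staleness with a TTL
        return value;
    }
}
```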

If an application updates information, it can emulate the write-through strategy as follows:

  1. Make the modification to the data store.
  2. Invalidate the corresponding item in the cache.

When the item is next required, the cache-aside strategy causes the updated data to be retrieved from the data store and added back into the cache (a sketch of this update path follows).
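Continuing the sketch above, the update path writes the store first and then deletes the cached copy, so the next GetOrLoad call repopulates it; updateStore is a hypothetical data-store call:

```csharp
using System;
using StackExchange.Redis;

public static class CacheAsideWrites
{
    public static void Update(IDatabase cache, string key, string newValue,
                              Action<string, string> updateStore)
    {
        updateStore(key, newValue); // 1. make the modification to the data store
        cache.KeyDelete(key);       // 2. invalidate the corresponding cache entry
    }
}
```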

 

From <https://msdn.microsoft.com/en-us/library/dn589799.aspx>
