L1L2RedisCache is a layered caching implementation of .NET Core's IDistributedCache.

Whether you're casually interested in my deranged ravings or only mildly aware of Redis and/or ASP.NET, here's the concept behind L1L2RedisCache: a caching solution that utilizes a level 1 (L1) memory cache and a level 2 (L2) Redis cache. So why use both?

We can't just cache everything in memory: it doesn't lend itself well to horizontal scaling. What happens if two instances of an application need to mirror the same cache? Or two hundred? They can't all feasibly maintain the same data in memory. Most caching solutions defer to simply storing data in a distributed cache shared by all instances, such as Redis.

However, memory caches have extreme performance benefits over remote caches. Using the latency figures from *Systems Performance: Enterprise and the Cloud* scaled to human time (where one CPU cycle takes one second), a memory cache retrieves data in minutes. On the same scale, a network round trip to a Redis cache could take up to a few years.

What if we could combine the performance benefits of memory caches with the horizontal scalability of distributed caches?
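The naive combination is a simple read-through: check the fast local cache first, fall back to the shared cache, and backfill. Here's a minimal, hypothetical sketch (the class and member names are mine, and `ConcurrentDictionary` stands in for both the real memory cache and the Redis client):

```csharp
using System.Collections.Concurrent;

// Illustrative sketch only — not L1L2RedisCache's implementation.
public class TwoLevelCache
{
    private readonly ConcurrentDictionary<string, byte[]> _l1 = new(); // per-instance memory
    private readonly ConcurrentDictionary<string, byte[]> _l2;         // shared store (Redis stand-in)

    public TwoLevelCache(ConcurrentDictionary<string, byte[]> l2) => _l2 = l2;

    public byte[]? Get(string key)
    {
        // Fast path: local memory ("minutes" on the human-time scale).
        if (_l1.TryGetValue(key, out var value))
            return value;

        // Slow path: the shared distributed cache ("years").
        if (_l2.TryGetValue(key, out value))
        {
            _l1[key] = value; // backfill L1 so the next read is fast
            return value;
        }

        return null; // cache miss in both levels
    }
}
```

The catch is that backfilled L1 entry: once a value lands in one instance's memory, nothing in this sketch ever tells that instance when the value changes.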

We'd immediately stumble into a notoriously difficult computing problem: "There are only two hard things in Computer Science: cache invalidation and naming things" (Phil Karlton).

Luckily, solving this within the context of Redis was someone else's problem first. Using memory as a level 1 cache and Redis as level 2 is not a new concept, and it was popularized by much smarter people at Stack Overflow through the power of Redis Pub/Sub: when one instance changes a key, it publishes a message so every other instance can evict its stale in-memory copy.
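As a rough sketch of that idea (not L1L2RedisCache's actual implementation — the channel name and wiring here are illustrative), using StackExchange.Redis and a standard memory cache:

```csharp
using Microsoft.Extensions.Caching.Memory;
using StackExchange.Redis;

// Sketch of Pub/Sub-driven L1 invalidation. Every instance subscribes to
// an invalidation channel; whichever instance writes a key publishes that
// key so all other instances evict their stale in-memory copies.
var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var memoryCache = new MemoryCache(new MemoryCacheOptions());

var subscriber = redis.GetSubscriber();
await subscriber.SubscribeAsync(
    RedisChannel.Literal("cache-invalidation"),          // illustrative channel name
    (_, key) => memoryCache.Remove((string)key!));       // evict the stale L1 entry

// On write: update Redis (L2) first, then broadcast the changed key.
var db = redis.GetDatabase();
await db.StringSetAsync("greeting", "hello");
await subscriber.PublishAsync(RedisChannel.Literal("cache-invalidation"), "greeting");
```

This sketch requires a running Redis server, so treat it as a shape of the mechanism rather than something to paste and run as-is.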

L1L2RedisCache attempts to be a generalized, accessible version of that approach. Because it implements ASP.NET Core's IDistributedCache, it is an adaptable, interchangeable abstraction that nets better performance in most circumstances.
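That interchangeability is the point: code written against IDistributedCache doesn't need to know which implementation is registered. A consuming service like this (the class, key, and "expensive lookup" are hypothetical) reads the same whether the backing cache is plain Redis or an L1/L2 layered cache:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class GreetingService
{
    private readonly IDistributedCache _cache;

    // The concrete cache is chosen at DI registration time, not here.
    public GreetingService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetGreetingAsync()
    {
        // With a layered cache, warm reads are served from L1 memory;
        // the code is identical either way.
        var cached = await _cache.GetStringAsync("greeting");
        if (cached is not null)
            return cached;

        var greeting = "hello"; // stand-in for an expensive lookup
        await _cache.SetStringAsync("greeting", greeting,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
        return greeting;
    }
}
```

`GetStringAsync` and `SetStringAsync` are the standard extension methods from Microsoft.Extensions.Caching.Distributed; swapping cache implementations only changes the service registration.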