If you've made it past that first sentence, you're either casually interested in my deranged ravings or mildly aware of Redis and/or .NET Core. L1L2RedisCache is a caching solution that combines a level 1 (L1) in-memory cache with a level 2 (L2) Redis cache. Why use both?
To start, we can't just cache everything in memory. What happens if two computers need to mirror the same cache? Or two hundred? They can't all feasibly maintain the same data in memory. Most caching solutions simply default to storing data in a distributed cache, such as Redis.
However, memory caches have extreme performance benefits over Redis caches. Using the latency figures from *Systems Performance: Enterprise and the Cloud* scaled to human time, a memory cache can retrieve data in minutes. On the same scale, a round trip to a Redis cache over the network could take up to a few years.
What if we could combine the performance benefits of memory caches with the horizontal scalability of distributed caches?
We'd immediately stumble into an extremely difficult problem:
> There are only two hard things in Computer Science: cache invalidation and naming things. — Phil Karlton
Luckily, solving this within the context of Redis was someone else's problem first. Using memory as a level 1 cache and Redis as a level 2 cache is not a new concept; it was popularized by much smarter people at Stack Overflow through the power of Redis Pub/Sub: each node answers reads from its local memory cache when it can, falls back to Redis when it can't, and listens on a Pub/Sub channel so that a write on one node evicts the stale copy from every other node's memory.
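The pattern is easier to see in code than in prose. Below is a minimal, language-agnostic sketch of the idea, not L1L2RedisCache's actual API: plain dictionaries stand in for the in-memory cache and for Redis, and a simple subscriber list stands in for Redis Pub/Sub. All names here are hypothetical.

```python
class Bus:
    """Stand-in for Redis Pub/Sub: broadcasts a key to all subscribers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, key, sender):
        for handler in self.subscribers:
            handler(key, sender)


class L1L2Cache:
    def __init__(self, l2_store, bus):
        self.l1 = {}        # level 1: process-local memory cache
        self.l2 = l2_store  # level 2: shared store (Redis in practice)
        self.bus = bus      # invalidation channel (Redis Pub/Sub in practice)
        bus.subscribe(self._on_invalidate)

    def get(self, key):
        # Fast path: answer from local memory when possible.
        if key in self.l1:
            return self.l1[key]
        # Slow path: fall back to the shared L2 cache and populate L1.
        value = self.l2.get(key)
        if value is not None:
            self.l1[key] = value
        return value

    def set(self, key, value):
        self.l2[key] = value
        # Tell every OTHER node to drop its stale L1 entry for this key.
        self.bus.publish(key, sender=self)
        self.l1[key] = value

    def _on_invalidate(self, key, sender):
        if sender is not self:
            self.l1.pop(key, None)


# Two nodes sharing one L2 store and one invalidation bus:
shared, bus = {}, Bus()
node_a, node_b = L1L2Cache(shared, bus), L1L2Cache(shared, bus)
node_a.set("greeting", "hello")
node_b.get("greeting")        # node B now holds "hello" in its L1
node_a.set("greeting", "hi")  # bus message evicts node B's stale copy
```

The key design point is the last line: without the invalidation message, node B would keep serving `"hello"` from its L1 forever, which is exactly the cache-invalidation problem the quote above warns about.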
My L1L2RedisCache humbly attempts to be a generalized, accessible version. By implementing .NET Core's IDistributedCache interface, it is simply an interchangeable abstraction that can be dropped into existing code and net better performance.