<post type=nerdy optional=yes> Cache maintenance between multiple threads is a tricky business. Consider a simple situation: a cache of items read from disk. The basic operation is to search the cache for an item, and if it isn't found, read the item from disk and put it into the cache. With one thread this is dirt simple:
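Here's a minimal sketch in Python; the cache is a plain dict, and read_from_disk is a hypothetical stand-in for whatever actually loads an item:

    cache = {}

    def read_from_disk(key):
        # stand-in for the real disk read
        return f"item for {key}"

    def get_item(key):
        item = cache.get(key)           # search the cache
        if item is None:
            item = read_from_disk(key)  # not found: read it from disk
            cache[key] = item           # and put it into the cache
        return item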
With more than one thread, things go from simple to not simple. Now the cache must be protected by a gate to serialize access (gates are also known as “semaphores”).
Okay, so the basic logic above now becomes (try 1):
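A sketch of that, reusing the dict and read_from_disk stand-in from above, with the gate as a threading.Lock held for the whole operation:

    import threading

    gate = threading.Lock()

    def get_item(key):
        with gate:                          # serialize ALL access to the cache
            item = cache.get(key)
            if item is None:
                item = read_from_disk(key)  # note: disk read happens inside the gate
                cache[key] = item
        return item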
Does this look right? Well, if we leave the cache gated while reading from disk, we force all cache users to block on the disk read. Not good. So how about this (try 2):
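The same sketch, but with the gate released during the disk read:

    def get_item(key):
        with gate:                      # gated: search the cache
            item = cache.get(key)
        if item is None:
            item = read_from_disk(key)  # ungated: the slow part
            with gate:                  # gated: put the item into the cache
                cache[key] = item
        return item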
Better, right? This way we only serialize cache access, not disk access. Overlapping those slow disk reads is probably the main reason we have multiple threads in the first place, so this is good. But we do have a problem: what if two threads concurrently want the same item? We could have the following timing:
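Something like this, with both threads after the same uncached item:

    thread1: search cache - not found
    thread2: search cache - not found
    thread1: read item from disk
    thread2: read item from disk    (a second, wasted read of the same item)
    thread1: put item into cache
    thread2: put item into cache    (overwriting the identical item)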
Depending on the application, this could happen anywhere from "never" to "always". If items are accessed more or less randomly, "never" is probably a good approximation; if they are accessed more or less in sequence, "always" is probably close. For this case, is there anything better we can do? The crux of the problem is that thread2 doesn't know thread1 is already reading the item. If it did, it could simply wait, then retrieve the item from the cache, and life would be good. So suppose we use this logic (try 3):
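One way to code that up, again a sketch on the same dict and gate: a sentinel "in progress" token goes into the cache before the ungated read, and any other thread that sees the token delays and retries (the 10 ms delay is an arbitrary choice here):

    import threading, time

    IN_PROGRESS = object()                     # sentinel: "someone is reading this"

    def get_item(key):
        while True:
            with gate:
                item = cache.get(key)
                if item is None:
                    cache[key] = IN_PROGRESS   # claim the read before ungating
            if item is None:
                item = read_from_disk(key)     # ungated disk read
                with gate:
                    cache[key] = item          # replace the token with the real item
                return item
            if item is not IN_PROGRESS:
                return item                    # a real item: done
            time.sleep(0.01)                   # in progress elsewhere: wait, retry

One caveat for real code: if the disk read can fail, the token should be removed on the error path, or every waiting thread will retry forever.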
Of course the "in progress" token adds complexity. But now the scenario above becomes:
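With both threads again after the same uncached item:

    thread1: search cache - not found; put "in progress" token in cache
    thread2: search cache - found "in progress" token
    thread1: read item from disk
    thread2: delay...
    thread1: replace token with item in cache
    thread2: search cache - found item    (no second disk read)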
Much better… A more complicated solution still would be to replace the delay with some kind of event, so that waiting threads wake as soon as the item arrives instead of polling. In actual practice a simple delay and retry is probably sufficient.
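For completeness, here's one shape the event variant could take, still a sketch on the same dict and gate: store a threading.Event alongside a "pending" marker so waiters sleep on the event instead of polling:

    import threading

    def get_item(key):
        with gate:
            entry = cache.get(key)
            if entry is None:
                event = threading.Event()       # waiters will sleep on this
                cache[key] = ("pending", event)
        if entry is None:
            item = read_from_disk(key)          # ungated disk read
            with gate:
                cache[key] = ("done", item)     # publish the item first...
            event.set()                         # ...then wake the waiters
            return item
        state, value = entry
        if state == "pending":
            value.wait()                        # sleep until the reader signals
            with gate:
                state, value = cache[key]
        return value

The waiters wake immediately instead of polling, at the cost of tracking an event per pending read, which is exactly the added complexity the delay-and-retry version avoids.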
</post>