Mitch Crane wrote:
By doing so you will end up talking to your block device _every_ time you
try to look up an item in the cache. You might want to consider keeping the
data structures in memory instead...
Well, mmap'ed pages are buffered by Linux in the page cache, so it's
possible the cache lookup will hit a page that's already resident in
memory and hasn't been evicted yet. Worst case, if the page is not in
the page cache, we will see some latency while those blocks are fetched
from the device and the missing page is populated. My device has
tens-of-microseconds latency for accesses (page cache misses), but that
should still be an order of magnitude or two faster than the application
having to go to disk, right?
I'm not sure the page cache will help you a lot unless you reference
just a small portion of your cache, because this is how it works:
you send in a key, and the server generates a hash value from the
key. It then jumps into the giant hash map (you'll have this in
memory), where it finds a pointer to a linked list (or NULL if no
item whose key hashes to this value exists).
All items in this linked list will be stored in the section you mapped
from the device, and because it is a linked list of items with the same
_hash_ value (not the same size), they may _not_ be stored in the same
_slab_ and would therefore be on different memory pages on disk...
Makes sense?
Cheers,
Trond