For now it's not a distributed system, and I have been using Cache::FileCache. But that still means freezing and thawing objects - which I'm trying to minimise.
Other things (IPC::MM, MLDBM::Sync, Cache::Mmap, BerkeleyDB) are significantly faster than Cache::FileCache. If you have tons of free memory, then go ahead and cache things in memory. My feeling is that the very small amount of time the fastest of these systems spend freezing and thawing is more than made up for by the huge memory savings, which allow you to run more server processes.
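For readers unfamiliar with the interface under discussion, here is a rough sketch of how a Cache::Cache-style module such as Cache::FileCache is used; the namespace and expiry values are illustrative, not from the discussion:

```perl
use strict;
use warnings;
use Cache::FileCache;

# Illustrative parameters; every object stored this way is
# frozen (serialized) on set() and thawed back on get().
my $cache = Cache::FileCache->new({
    namespace          => 'MyApp',
    default_expires_in => 600,      # seconds
});

$cache->set( 'user:42', { name => 'alice', visits => 3 } );
my $user = $cache->get('user:42');  # undef on a miss or after expiry
```

The freeze/thaw on every get() and set() is exactly the cost being weighed against memory use above.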
When you say that Cache::Mmap is only limited by the size of your disk, is that because the file in memory gets written to disk as part of VM? (I don't see any other mention of files in the docs.) Which presumably means resizing your VM to make space for the cache?
That's right, it uses your system's mmap() call. I've never needed to adjust the amount of VM I have because of memory-mapping a file, but I suppose it could happen. This would be a good question for the author of the module, or an expert on your system's mmap() implementation.
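A minimal sketch of the memory-mapped file arrangement being described, assuming the Cache::Mmap constructor and read/write interface from its documentation (the file path and bucket numbers here are illustrative):

```perl
use strict;
use warnings;
use Cache::Mmap;

# The cache lives in an ordinary file that mmap() maps into each
# process's address space, so its size is bounded by disk, not RAM.
my $cache = Cache::Mmap->new( '/tmp/myapp.cache', {
    buckets    => 64,       # illustrative: number of hash buckets
    bucketsize => 65_536,   # illustrative: bytes per bucket
});

$cache->write( 'key', 'value' );
my ($found, $value) = $cache->read('key');
```

Because every server process maps the same file, they all see the same cache without each holding a private copy in memory.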
I see the author of IPC::MM has an e-toys address - was this something you used at e-toys?
It was used at one point, although not in the version of the system that I wrote about. He originally wrote it as a wrapper around the mm library, and I asked if he could put in a shared hash just for fun. It turned out to be very fast, largely because the sharing and the hash (or btree) are implemented in C. The Perl part is just an interface to it.
I know very little about shared memory segments, but is MM used to share small data objects, rather than to keep large caches in shared memory?
It's a shared hash. You can put whatever you want into it. Apache uses mm to share data between processes.
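The shared-hash usage being described looks roughly like this, following the IPC::MM synopsis; the segment size and lock-file path are illustrative assumptions:

```perl
use strict;
use warnings;
use IPC::MM;

# Create a shared memory segment (size in bytes) via the mm library,
# then a hash inside it; both live in C, as mentioned above.
my $mm   = mm_create( 2 * 1024 * 1024, '/tmp/mm_lockfile' );
my $hash = mm_make_hash($mm);

# Tie a Perl hash to the shared one; server processes that inherit
# $hash across fork() all see the same data.
tie my %shared, 'IPC::MM::Hash', $hash;
$shared{counter} = 1;
```

From Perl's side it is just a tied hash, which is why you can put whatever you want into it.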
Ralf Engelschall writes in the MM documentation:
"The maximum size of a continuous shared memory segment one can allocate depends on the underlaying platform. This cannot be changed, of course. But currently the high-level malloc(3)-style API just uses a single shared memory segment as the underlaying data structure for an MM object which means that the maximum amount of memory an MM object represents also depends on the platform."
What implications does this have on the size of the cache that can be created with IPC::MM?
It varies by platform, but I believe that on Linux it means each individual hash is limited to 64MB. So maybe I spoke too soon about having unlimited storage, but you should be able to have as many hashes as you want.
If you're seriously concerned about storage limits like these, you could use one of the other options, like MLDBM::Sync or BerkeleyDB, which use disk storage.
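For comparison, the disk-backed alternative looks roughly like this, assuming the tie interface from the MLDBM::Sync synopsis (the file path and the DB_File/Storable choice are illustrative):

```perl
use strict;
use warnings;
use MLDBM::Sync;                 # locking wrapper around MLDBM
use MLDBM qw(DB_File Storable);  # illustrative: DB_File backend, Storable serializer
use Fcntl qw(:DEFAULT);

# Each access locks the dbm file, so multiple server processes can
# share it safely; capacity is limited by disk, not a shared segment.
tie my %cache, 'MLDBM::Sync', '/tmp/myapp.dbm', O_CREAT | O_RDWR, 0640;

$cache{'user:42'} = { name => 'alice' };  # frozen via Storable on write
my $user = $cache{'user:42'};             # thawed on read
```

This trades the shared-memory size ceiling for the freeze/thaw and locking overhead discussed at the start.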
- Perrin