Thanks Perrin - I have a translator-type application where the total store of %CACHE key=values runs into the low millions of entries.

When a translation happens on a random, ad hoc string of sometimes thousands of "words", the process simply does something like this:

    Untranslated: hfj kei hty ... jan oej wio
    Translated:   $CACHE{hfj} $CACHE{kei} $CACHE{hty} ... $CACHE{jan} $CACHE{oej} $CACHE{wio}
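In code terms, each request boils down to a per-word hash lookup, roughly like the following (a minimal sketch; the whitespace split and the pass-through for unknown words are my assumptions, not necessarily the real rules):

    use strict;
    use warnings;

    our %CACHE;   # word => translation, loaded once per child from the RDBMS

    sub translate_string {
        my ($input) = @_;
        # Split on whitespace and translate word by word; unknown words
        # pass through untranslated (both behaviors are assumptions).
        return join ' ',
            map { exists $CACHE{$_} ? $CACHE{$_} : $_ }
            split /\s+/, $input;
    }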
This is easy if I simply load the translation key=values once (from an RDBMS). Although the hash itself is at least a few MB in size, this works perfectly and avoids being forced to do an RDBMS lookup for every single word (or against a split-out word list) to complete each string.

The only hang-up is that periodically (every few translation cycles) some individual key values change. Right now, the child processes just hold their breath and reload %CACHE whenever this happens. Furthermore, the volume of these translations is rising (thousands per hour), which is why I want to globalize %CACHE and provide a way to update individual keys through a separate mechanism (see the Cache::FastMmap sketch at the end of this message).

Thanks again

On 9/5/07, Perrin Harkins <[EMAIL PROTECTED]> wrote:
>
> On 9/6/07, David Willams <[EMAIL PROTECTED]> wrote:
> > Child processes cannot update %CACHE, so what other apache methods or
> > architectural strategies exist (creative, elaborate, etc) or have been
> > used to update a similar hash?
>
> You'll find many discussions about sharing data in the list archives
> and in the mod_perl books. Some basic options:
>
> - RDBMS
> - Cache::FastMmap
> - BerkeleyDB
> - Cache::Memcached
>
> > Unfortunately, a database will not work for my current problem.
>
> If you could explain what your requirements are, we might be able to
> make better suggestions. If the issue is speed, BerkeleyDB and
> Cache::FastMmap are both significantly faster than an RDBMS.
>
> - Perrin
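P.S. To make the requirements concrete, here is the direction I have in mind with Cache::FastMmap (one of the options you listed). The share_file path and cache_size are placeholders, and I haven't benchmarked any of this:

    use strict;
    use warnings;
    use Cache::FastMmap;

    # Constructed the same way in the updater and in each child; attaching
    # to the same share_file is what shares the data across processes.
    my $cache = Cache::FastMmap->new(
        share_file => '/var/tmp/translate.mmap',   # placeholder path
        cache_size => '256m',                      # must hold the working set
        init_file  => 0,                           # keep an existing file
    );

    # Updater process: push only the keys that changed; no full reload.
    $cache->set('hfj', 'new translation for hfj');

    # Child process: a get() sees the most recent set() immediately.
    my $translated = $cache->get('hfj');

The caveat I can see is that Cache::FastMmap evicts older entries when a page fills up, so if cache_size cannot hold all of the low-millions of entries, a get() miss would need an RDBMS fallback; BerkeleyDB may be the safer fit if eviction is unacceptable.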