> On closer inspection I think it's mostly okay because the session object
> is supposed to go out of scope and get destroyed (and thus untie) after
> every interaction, and the locking should mean that only one process is
> writing during that time. You could run into trouble though if a
> process is able to open for reading and then later do writes without
> closing and re-opening the file first, and it looks like that does
> happen. This could lead to lost updates from other processes.
>
> I'd suggest either looking at how MLDBM::Sync works, or using BerkeleyDB
> instead. BerkeleyDB can be tricky to get right, but it's significantly
> faster than any other storage mechanism supported by Apache::Session.
>
> The corruption problems in this thread could also have been caused by
> the usual suspect -- scoping issues. If the session object doesn't get
> destroyed, it will keep the file open, and that is definitely not safe.
>
> - Perrin
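
For reference, the scoping pattern Perrin describes looks something like this: keep the tie in the narrowest possible block so the session object is destroyed (and the lock released) as soon as the work is done. This is just a sketch assuming Apache::Session::File; the paths and the handle_request wrapper are placeholders, not code from this thread:

use strict;
use warnings;
use Apache::Session::File;

sub handle_request {
    my ($session_id) = @_;

    {
        # Tie inside a tight block so nothing outlives it.
        my %session;
        tie %session, 'Apache::Session::File', $session_id, {
            Directory     => '/tmp/sessions',
            LockDirectory => '/tmp/session_locks',
        };

        $session{last_seen} = time();

        # Explicit untie so no lingering reference keeps the
        # session file open (and locked) past this block.
        untie %session;
    }

    # ... build the rest of the response here ...
}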
Been doing some more testing on this. Previously I was untying the sessions
when the handler finished, but it only takes one child exiting uncleanly,
where I can't catch the error and untie the sessions, to cause corruption.
If I untie the session variables at the very start of the handler instead,
everything works fine (a rough sketch is at the end of this message).

As for performance, I would say any DB file implementation will be slower
if you are using lock files, because you can't have concurrent access to
the same file. The file store is about 8 times as fast as DB_File in my
setup. BerkeleyDB is fast, but in concurrent data store mode, which is the
easiest mode to use when you want concurrent access, you still have to
flush the memory cache to disk on every access or other processes won't
see the latest writes to the database. At least that has been my limited
experience anyway.

Chris
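
The "untie at the top of the handler" workaround mentioned above looks roughly like this -- again just a sketch assuming Apache::Session::File, a package-level %session that can survive an aborted request in the same child, and placeholder paths:

use strict;
use warnings;
use Apache::Session::File;

our %session;

sub handler {
    my ($session_id) = @_;   # placeholder: normally read from a cookie

    # Defensive untie: if an earlier request in this child died before
    # untying, release the stale tie (and its lock) before re-tying.
    untie %session if tied %session;

    tie %session, 'Apache::Session::File', $session_id, {
        Directory     => '/tmp/sessions',
        LockDirectory => '/tmp/session_locks',
    };

    $session{hits}++;   # example write

    # Normal cleanup; the defensive untie above covers the case where
    # this line is never reached.
    untie %session;
    return 0;
}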