Perrin Harkins wrote:

> > I am currently using Storable's lock_store and lock_retrieve to maintain
> > a persistent data structure.  I use a session_id as a key, and each data
> > structure has a last-modified time that I use to expire it.  I was under
> > the impression that these two functions would be safe for concurrent
> > access, but I seem to be getting 'bad hashes' returned after there is an
> > attempt at concurrent access to the storable file.
>
> (You're not using NFS, right?)

Correct

>
>
> What are the specifics of your bad hashes?  Are they actually corrupted, or
> do they just contain data that's different from what you expected?

The error (which I neglected to post, oops!) is
--begin error message---
[error] Bad hash at blib/lib/Storable.pm (autosplit into
blib/lib/auto/Storable/_retrieve.al) line 275 at User.pm line 115

--end error message---

Where line 115 of User.pm is:
my $hr = lock_retrieve($file);

> The
> lock_retrieve function only locks the file while it is being read, so
> there's nothing to stop a different process from running in and updating the
> file while you are manipulating the data in memory.  Then you store it and
> overwrite whatever updates other processes may have done.  If you want to
> avoid this, you'll need to do your own locking, or just use Apache::Session.
> - Perrin
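For what it's worth, here is a rough sketch of the do-your-own-locking approach Perrin describes: hold an exclusive flock on a sentinel file across the entire retrieve-modify-store cycle, so no other process can slip in between the read and the write. (The file names and the session key are made up for illustration; lock_retrieve/lock_store each only lock for the duration of that single call.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
use Storable qw(retrieve nstore);

my $file = '/tmp/sessions.db';    # hypothetical session store

# Take an exclusive lock on a separate lock file for the WHOLE update,
# not just the read or just the write.
open my $lock, '>', "$file.lock" or die "can't open lock file: $!";
flock $lock, LOCK_EX or die "can't lock: $!";

# Read, modify, and write back, all while the lock is held.
my $hr = -e $file ? retrieve($file) : {};
$hr->{some_session_id}{last_modified} = time;
nstore $hr, $file;

close $lock;    # releases the lock
```

Since the lock is held from retrieve through nstore, a second process running the same code blocks on the flock call and never sees (or clobbers) a half-updated file.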
