Re: Storable (lock_store lock_retrieve) question

2001-04-01 Thread Mike Cameron



Perrin Harkins wrote:

> > I am currently using Storable's lock_store and lock_retrieve to maintain
> > a persistent data structure.  I use a session_id as a key and each data
> > structure has a last modified time that I use to expire it.  I was under
> > the impression that these two functions would be safe for concurrent
> > access, but I seem to be getting 'bad hashes' returned after there is an
> > attempt at concurrent access to the storable file.
>
> (You're not using NFS, right?)

Correct

>
>
> What are the specifics of your bad hashes?  Are they actually corrupted, or
> do they just contain data that's different from what you expected?

The error (which I neglected to post, oops!) is:
--- begin error message ---
[error] Bad hash at blib/lib/Storable.pm (autosplit into
blib/lib/auto/Storable/_retrieve.al) line 275 at User.pm line 115

--- end error message ---

Where line 115 of User.pm is:
my $hr = lock_retrieve($file);

> The
> lock_retrieve function only locks the file while it is being read, so
> there's nothing to stop a different process from running in and updating the
> file while you are manipulating the data in memory.  Then you store it and
> overwrite whatever updates other processes may have done.  If you want to
> avoid this, you'll need to do your own locking, or just use Apache::Session.
> - Perrin
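
A minimal sketch of the "do your own locking" approach quoted above: hold
one exclusive flock on a separate lock file for the whole
read-modify-write cycle, and use the plain store/retrieve since the lock
is already held (the lock file name here is an arbitrary placeholder).

use Fcntl qw(:flock);
use Storable qw(store retrieve);

# One exclusive lock spans retrieve, modification, and store, so no
# other process can update the file in between.
open my $lock, '>', 'my_file.lock' or die "can't open lock file: $!";
flock($lock, LOCK_EX) or die "can't lock: $!";

my $hr = -s 'my_file' ? retrieve('my_file') : {};
# ... expire old sessions and modify $hr here ...
store($hr, 'my_file');

close $lock;    # closing the handle releases the lock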




Re: Storable (lock_store lock_retrieve) question

2001-04-01 Thread Perrin Harkins

> I am currently using Storable's lock_store and lock_retrieve to maintain
> a persistent data structure.  I use a session_id as a key and each data
> structure has a last modified time that I use to expire it.  I was under
> the impression that these two functions would be safe for concurrent
> access, but I seem to be getting 'bad hashes' returned after there is an
> attempt at concurrent access to the storable file.

(You're not using NFS, right?)

What are the specifics of your bad hashes?  Are they actually corrupted, or
do they just contain data that's different from what you expected?  The
lock_retrieve function only locks the file while it is being read, so
there's nothing to stop a different process from running in and updating the
file while you are manipulating the data in memory.  Then you store it and
overwrite whatever updates other processes may have done.  If you want to
avoid this, you'll need to do your own locking, or just use Apache::Session.
- Perrin
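
For the Apache::Session route, a minimal sketch using Apache::Session::File,
which hides the locking and persistence behind a tied hash (the directory
paths below are placeholders):

use Apache::Session::File;

# Tie the session hash to its on-disk record; the module takes care of
# locking and of writing changes back.
my %session;
tie %session, 'Apache::Session::File', $sid, {
    Directory     => '/tmp/sessions',
    LockDirectory => '/tmp/sessions/lock',
};

$session{other_stuff} = $whatever;

untie %session;    # flushes the session and releases its lock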




Storable (lock_store lock_retrieve) question

2001-04-01 Thread Mike Cameron

I am currently using Storable's lock_store and lock_retrieve to maintain
a persistent data structure.  I use a session_id as a key and each data
structure has a last modified time that I use to expire it.  I was under
the impression that these two functions would be safe for concurrent
access, but I seem to be getting 'bad hashes' returned after there is an
attempt at concurrent access to the storable file.  Any pointers would
be appreciated.  Below is a simplified example of what I am doing.

use Storable qw(lock_store lock_retrieve);

# Load the hash and expire stale sessions
my $hr = lock_retrieve("my_file");
for (keys %{$hr}) {
    delete $hr->{$_} if $hr->{$_}->{time} < time() - $session_length;
}

# do stuff here
$hr->{$sid}->{other_stuff} = $whatever;

# store the file again
lock_store($hr, "my_file");

P.S. Thanks to all who contribute regularly on this list.  I find it
most enlightening.