Ok, that makes sense to me now. I see how you could lose some data.

Do you have any opinions on what a good algorithm might be for acquiring
locks without the potential for overwriting data? I suppose I could
create a second "lock file" which I would open using "r", then wait
for a lock on the lock file and go on with any processing on the
actual data file. After the processing is done, release the lock on the
lock file. Perhaps there are better ways to do this.
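A minimal sketch of that lock-file idea (all filenames here are made up).
One note: opening the lock file with "r" fails if the file does not exist
yet, so this sketch opens it with "w" instead -- truncation is harmless on
the lock file itself, since it holds no data; only the data file needs
protecting.

```php
<?php
// Hypothetical filenames for illustration only.
$lockfile = 'data.lock';
$datafile = 'data.txt';

// Open the lock file in write mode. Truncating it is harmless
// because it contains no data of its own.
$lock = fopen($lockfile, 'w');
if ($lock === false) {
    die("could not open lock file\n");
}

// Block until we hold an exclusive lock on the lock file.
if (flock($lock, LOCK_EX)) {
    // Only now is it safe to truncate and rewrite the data file.
    $fp = fopen($datafile, 'w');
    fwrite($fp, "new contents\n");
    fclose($fp);

    // Release the lock so other processes can proceed.
    flock($lock, LOCK_UN);
}
fclose($lock);
?>
```

Every process that touches the data file would have to go through the same
lock file for this to work, since flock() locks are advisory.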

What do you think?

Matt Friedman
Web Applications Developer
www.SpryNewMedia.com
 

-----Original Message-----
From: Jim Winstead [mailto:[EMAIL PROTECTED]] 
Sent: Sunday April 7, 2002 8:27 PM
To: [EMAIL PROTECTED]
Subject: Re: FW: [PHP] - REPOST - Flock manual clarification please ;-)

Matt Friedman <[EMAIL PROTECTED]> wrote:
> In regards to the <snip> above, under what circumstances might you have
> to create a separate lock file? Is this an OS issue? Is it an issue when
> concurrency is high? The manual says "you may have to"; I am looking for
> some clarification as to when exactly you "may have to" follow the
> <snip> advice.

when you do an fopen("file","w"), it truncates the file -- before you
can call flock(). so if one process locks the file, and starts writing
data, a second one could just come along and blow away all or part of
the data even though the first still has it locked. by the time the
second process calls flock() and notices that the first has it locked,
it has already truncated the file.
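a quick demonstration of that race (filename made up): the moment
fopen() returns with mode "w", the file is already empty, before flock()
has even been called.

```php
<?php
// Hypothetical filename for illustration only.
file_put_contents('demo.txt', "existing data\n");

// A second writer opens the file with "w". At this instant the file
// has already been truncated, even though flock() has not run yet.
$fp = fopen('demo.txt', 'w');

clearstatcache();              // filesize() results are cached
var_dump(filesize('demo.txt')); // the old data is already gone

// Only now would the writer try to lock -- too late to save the data.
flock($fp, LOCK_EX);
flock($fp, LOCK_UN);
fclose($fp);
?>
```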

jim

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
