Re: [PHP] File locking with PHP functions

2011-04-04 Thread Louis Huppenbauer
It may not be a direct answer to your question, but...
You could just use flock() to lock the file while accessing it.
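
Something like this (untested sketch; the path and contents are only placeholders):

<?php
// Take an exclusive lock before rewriting the file, release it when done.
$fp = fopen('/path/to/data.txt', 'c+');        // 'c+' opens without truncating
if ($fp !== false && flock($fp, LOCK_EX)) {    // blocks until the lock is granted
    ftruncate($fp, 0);                         // now it is safe to rewrite the file
    fwrite($fp, "new contents\n");
    fflush($fp);
    flock($fp, LOCK_UN);                       // release the lock
}
if ($fp !== false) {
    fclose($fp);
}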

louis

2011/4/4 Paul M Foster :
> I'd like to know (from someone who knows the internals more than I do)
> whether the following functions lock files and to what extent:
>
> fopen($filename, 'w');
>
> Does this function lock the file from writes until fclose()?
> Does it lock from reads as well?
>
> fopen($filename, 'r+');
>
> Does this function lock the file from writes until fclose()?
> Does it lock the file from reads as well?
>
> file($filename);
>
> Does this function lock the file from writes until finished?
> Does it lock the file from reads as well?
>
> All this is in the context of a Linux/Unix web server.
>
> Paul
>
> --
> Paul M. Foster
> http://noferblatz.com
> http://quillandmouse.com
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] File locking with PHP functions

2011-04-04 Thread Stuart Dallas
On Monday, 4 April 2011 at 15:28, Paul M Foster wrote:
> I'd like to know (from someone who knows the internals more than I do)
> whether the following functions lock files and to what extent:
> 
> fopen($filename, 'w');
> 
> Does this function lock the file from writes until fclose()?
> Does it lock from reads as well?
> 
> fopen($filename, 'r+');
> 
> Does this function lock the file from writes until fclose()?
> Does it lock the file from reads as well?
> 
> file($filename);
> 
> Does this function lock the file from writes until finished?
> Does it lock the file from reads as well?
> 
> All this is in the context of a Linux/Unix web server.

No, fopen does not lock the file. Check out http://php.net/flock but be sure to 
read all of that page because there are some gotchas with using it.
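
For instance, flock() is advisory only and blocks by default; a non-blocking attempt looks roughly like this (sketch, path is a placeholder):

<?php
$wouldBlock = 0;
$fp = fopen('/path/to/data.txt', 'r+');
if ($fp !== false && flock($fp, LOCK_EX | LOCK_NB, $wouldBlock)) {
    // We own the lock: safe to read/modify/write the file here.
    flock($fp, LOCK_UN);
} elseif ($wouldBlock) {
    // Another process holds the lock; retry later or give up.
}
if ($fp !== false) {
    fclose($fp);
}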

-Stuart

-- 
Stuart Dallas
3ft9 Ltd
http://3ft9.com/





-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking...

2009-03-01 Thread Stuart
2009/3/1 Robert Cummings 

> On Sun, 2009-03-01 at 10:05 -0800, bruce wrote:
> > hi rob...
> >
> > what you have written is similar to my initial approach... my question,
> and
> > the reason for posting this to a few different groups.. is to see if
> someone
> > has pointers/thoughts for something much quicker...
> >
> > this is going to handle processing requests from client apps to a
> > webservice.. the backend of the service has to quickly process the files
> in
> > the dir as fast as possible to return the data to the web client query...
>
> Then use a database to process who gets what. DB queries will queue up
> while a lock is in place so batches will occur on first come first
> served basis. I had thought this was for a background script. This will
> save your script from having to browse the filesystem files, sort by
> age, etc. Instead put an index on the ID of the file and grab the X
> lowest IDs.


A database would be the best way to do this, but I've needed to handle this
situation with files in the past and this is the solution I came up with...

1) Get the next filename to process
2) Try to move it to /tmp/whatever.
3) Check to see if /tmp/whatever. exists, and if it does, process it,
then delete it or move it to an archive directory
4) Repeat until there are no files left to process

I have this running on a server that processes several million files a day
without any issues.
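
Roughly, in code (an untested sketch; the directory names and the pid suffix on the claim target are my assumptions, not Stuart's actual implementation):

<?php
$queueDir = '/path/to/queue';                    // where new files arrive
$claimed  = '/tmp/whatever.' . getmypid();       // per-process claim target

foreach (scandir($queueDir) as $name) {
    if ($name === '.' || $name === '..') {
        continue;
    }
    // rename() is atomic on a single filesystem, so only one process
    // can successfully claim any given file.
    if (@rename($queueDir . '/' . $name, $claimed) && file_exists($claimed)) {
        // ... process $claimed here ...
        unlink($claimed);                        // or move it to an archive dir
    }
}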

For database-based queues I use a similar system but the move is replaced by
an update which sets the pid field of a single row. I then do a select where
that pid is my pid and process whatever comes back. I have several queues
that use this system and combined they're handling 10's of millions of queue
items per day without any problems, with the advantage that I can scale
across servers as well as processes.
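
The claim step looks something like this in code (sketch only; PDO and the table/column names are invented for illustration):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=queue_db', 'user', 'pass');
$pid = (int) getmypid();

// Stamp a batch of unowned rows with our pid, then fetch back only ours.
$pdo->exec("UPDATE queue SET pid = $pid WHERE pid IS NULL ORDER BY id LIMIT 50");

$stmt = $pdo->prepare('SELECT id, payload FROM queue WHERE pid = ?');
$stmt->execute(array($pid));

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // ... process $row['payload'] ...
    $pdo->prepare('DELETE FROM queue WHERE id = ?')->execute(array($row['id']));
}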

-Stuart

-- 
http://stut.net/


RE: [PHP] file locking...

2009-03-01 Thread Robert Cummings
On Sun, 2009-03-01 at 10:05 -0800, bruce wrote:
> hi rob...
> 
> what you have written is similar to my initial approach... my question, and
> the reason for posting this to a few different groups.. is to see if someone
> has pointers/thoughts for something much quicker...
> 
> this is going to handle processing requests from client apps to a
> webservice.. the backend of the service has to quickly process the files in
> the dir as fast as possible to return the data to the web client query...

Then use a database to process who gets what. DB queries will queue up
while a lock is in place so batches will occur on first come first
served basis. I had thought this was for a background script. This will
save your script from having to browse the filesystem files, sort by
age, etc. Instead put an index on the ID of the file and grab the X
lowest IDs.

Cheers,
Rob.
-- 
http://www.interjinn.com
Application and Templating Framework for PHP


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] file locking...

2009-03-01 Thread bruce
hi rob...

what you have written is similar to my initial approach... my question, and
the reason for posting this to a few different groups.. is to see if someone
has pointers/thoughts for something much quicker...

this is going to handle processing requests from client apps to a
webservice.. the backend of the service has to quickly process the files in
the dir as fast as possible to return the data to the web client query...

thanks



-Original Message-
From: Robert Cummings [mailto:rob...@interjinn.com]
Sent: Sunday, March 01, 2009 9:54 AM
To: bruce
Cc: php-general@lists.php.net
Subject: RE: [PHP] file locking...


On Sun, 2009-03-01 at 09:09 -0800, bruce wrote:
> hi rob...
>
> here's the issue in more detail..
>
> i have multiple processes that are generated/created and run in a
> simultaneous manner. each process wants to get XX number of files from the
> same batch of files... assume i have a batch of 50,000 files. my issue is
> how do i allow each of the processes to get their batch of unique files as
> fast as possible. (the 50K number is an arbotrary number.. my project will
> shrink/expand over time...
>
> if i dump all the 50K files in the same dir, i can have a lock file that
> would allow each process to sequentially read/write the lock file, and
then
> access the dir to get the XX files the process is needing. (each process
is
> just looking to get the next batch of files for processing. there's no
> searching based on text in the name of the files. it's a kind of fifo
queing
> system) this approach could work, but it's basically sequential, and could
> in theory get into race conditions regarding the lockfile.
>
> i could also have the process that creates the files, throw the files in
> some kind of multiple directory processes, where i split the 50K files
into
> separate dirs and somehow implement logic to allow the cient process to
> fetch the files from the unique/separate dirs.. but this could get ugly.
>
> so my issue is essentially how can i allow as close to simultaneous access
> by client/child processes to a kind of FIFO of files...
>
> whatever logic i create for this process, will also be used for the next
> iteration of the project, where i get rid of the files.. and i use some
sort
> of database as the informational storage.
>
> hopefully this provides a little more clarity.

Would I be right in assuming that a process grabs X of the oldest
available files and then begins to work on them? Then the next process
would essentially grab the next X oldest files, and so on and so forth over
and over again? Also, is the file discarded once processed? Would I be
correct in presuming that processing of the files takes longer than
grabbing the files wanted? If so, then I would have a single lock upon
which all processes wait. Each process grabs the lock when it can and
then moves the X oldest files to a working directory where it can then
process them.

So... directory structure:

/ROOT
/ROOT/queue
/ROOT/work

Locks...

/ROOT/lock

So let's say you have 500 files:

/ROOT/queue/file_001.dat
/ROOT/queue/file_002.dat
/ROOT/queue/file_003.dat
...
/ROOT/queue/file_499.dat
/ROOT/queue/file_500.dat

And you have 5 processes...

/proc/1
/proc/2
/proc/3
/proc/4
/proc/5

Now to start, all processes try to grab the lock at the same time; by
virtue of lock mechanics only one process gets the lock... let's say, for
instance, 4. While 4 has the lock, all the other processes go to sleep
for say... 1 usecs... upon failing to get the lock.

So process 4 transfers file_001.dat through to file_050.dat
into /ROOT/work.

/ROOT/work/file_001.dat
/ROOT/work/file_002.dat
/ROOT/work/file_003.dat
...
/ROOT/work/file_049.dat
/ROOT/work/file_050.dat

Then it releases the lock and begins processing meanwhile the other
processes wake up and try to grab the lock again... this time PID 2 gets
it. It does the same...

/ROOT/work/file_051.dat
/ROOT/work/file_052.dat
/ROOT/work/file_053.dat
...
/ROOT/work/file_099.dat
/ROOT/work/file_100.dat

/ROOT/queue/file_101.dat
/ROOT/queue/file_102.dat
/ROOT/queue/file_103.dat
...
/ROOT/queue/file_499.dat
/ROOT/queue/file_500.dat

Now while it was doing that, PID 4 finished and all its files are now
deleted. The first thing it does is try to get the lock so it can get
more... but it's still owned by PID 2, so PID 4 goes to sleep. Once PID 2
gets its files it releases the lock and off it goes, and the cycle
continues. Now there's still an issue with respect to incoming partially
written files. During the incoming process those should be written
elsewhere... let's say /ROOT/incoming. Once writing of the file is
complete it can be moved to /ROOT/queue. Also, if you don't want
processes to delete the files, you can have yet another directory,
/ROOT/processed.

RE: [PHP] file locking...

2009-03-01 Thread Robert Cummings
On Sun, 2009-03-01 at 09:09 -0800, bruce wrote:
> hi rob...
> 
> here's the issue in more detail..
> 
> i have multiple processes that are generated/created and run in a
> simultaneous manner. each process wants to get XX number of files from the
> same batch of files... assume i have a batch of 50,000 files. my issue is
> how do i allow each of the processes to get their batch of unique files as
> fast as possible. (the 50K number is an arbotrary number.. my project will
> shrink/expand over time...
> 
> if i dump all the 50K files in the same dir, i can have a lock file that
> would allow each process to sequentially read/write the lock file, and then
> access the dir to get the XX files the process is needing. (each process is
> just looking to get the next batch of files for processing. there's no
> searching based on text in the name of the files. it's a kind of fifo queing
> system) this approach could work, but it's basically sequential, and could
> in theory get into race conditions regarding the lockfile.
> 
> i could also have the process that creates the files, throw the files in
> some kind of multiple directory processes, where i split the 50K files into
> separate dirs and somehow implement logic to allow the cient process to
> fetch the files from the unique/separate dirs.. but this could get ugly.
> 
> so my issue is essentially how can i allow as close to simultaneous access
> by client/child processes to a kind of FIFO of files...
> 
> whatever logic i create for this process, will also be used for the next
> iteration of the project, where i get rid of the files.. and i use some sort
> of database as the informational storage.
> 
> hopefully this provides a little more clarity.

Would I be right in assuming that a process grabs X of the oldest
available files and then begins to work on them? Then the next process
would essentially grab the next X oldest files, and so on and so forth over
and over again? Also, is the file discarded once processed? Would I be
correct in presuming that processing of the files takes longer than
grabbing the files wanted? If so, then I would have a single lock upon
which all processes wait. Each process grabs the lock when it can and
then moves the X oldest files to a working directory where it can then
process them.

So... directory structure:

/ROOT
/ROOT/queue
/ROOT/work

Locks...

/ROOT/lock
  
So let's say you have 500 files:

/ROOT/queue/file_001.dat
/ROOT/queue/file_002.dat
/ROOT/queue/file_003.dat
...
/ROOT/queue/file_499.dat
/ROOT/queue/file_500.dat

And you have 5 processes... 

/proc/1
/proc/2
/proc/3
/proc/4
/proc/5

Now to start, all processes try to grab the lock at the same time; by
virtue of lock mechanics only one process gets the lock... let's say, for
instance, 4. While 4 has the lock, all the other processes go to sleep
for say... 1 usecs... upon failing to get the lock.

So process 4 transfers file_001.dat through to file_050.dat
into /ROOT/work.

/ROOT/work/file_001.dat
/ROOT/work/file_002.dat
/ROOT/work/file_003.dat
...
/ROOT/work/file_049.dat
/ROOT/work/file_050.dat

Then it releases the lock and begins processing meanwhile the other
processes wake up and try to grab the lock again... this time PID 2 gets
it. It does the same...

/ROOT/work/file_051.dat
/ROOT/work/file_052.dat
/ROOT/work/file_053.dat
...
/ROOT/work/file_099.dat
/ROOT/work/file_100.dat

/ROOT/queue/file_101.dat
/ROOT/queue/file_102.dat
/ROOT/queue/file_103.dat
...
/ROOT/queue/file_499.dat
/ROOT/queue/file_500.dat

Now while it was doing that, PID 4 finished and all its files are now
deleted. The first thing it does is try to get the lock so it can get
more... but it's still owned by PID 2, so PID 4 goes to sleep. Once PID 2
gets its files it releases the lock and off it goes, and the cycle
continues. Now there's still an issue with respect to incoming partially
written files. During the incoming process those should be written
elsewhere... let's say /ROOT/incoming. Once writing of the file is
complete it can be moved to /ROOT/queue. Also, if you don't want
processes to delete the files, you can have yet another
directory, /ROOT/processed. So with everything considered here's your
directory structure:

/ROOT
/ROOT/incoming
/ROOT/processed
/ROOT/queue
/ROOT/work

One last thing to consider is that if there are no available files on
which to work then you might have your processes sleep a little longer.
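
A bare-bones sketch of the grab-the-lock-and-move step described above (untested; the flock() on /ROOT/lock, the *.dat pattern and the batch size of 50 are my assumptions, not Rob's code):

<?php
$lock = fopen('/ROOT/lock', 'c');          // the single lock all processes contend for
if ($lock === false || !flock($lock, LOCK_EX)) {   // block until we own it
    exit(1);
}

// Pick the 50 oldest files in the queue.
$files = glob('/ROOT/queue/*.dat');
function byMtime($a, $b) { return filemtime($a) - filemtime($b); }
usort($files, 'byMtime');

$mine = array();
foreach (array_slice($files, 0, 50) as $file) {
    $dest = '/ROOT/work/' . basename($file);
    if (rename($file, $dest)) {
        $mine[] = $dest;
    }
}

flock($lock, LOCK_UN);                     // let the next process grab its batch
fclose($lock);

// ... now process (and then delete or archive) everything in $mine ...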

Cheers,
Rob.
-- 
http://www.interjinn.com
Application and Templating Framework for PHP


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] file locking...

2009-03-01 Thread bruce
hi rob...

here's the issue in more detail..

i have multiple processes that are generated/created and run in a
simultaneous manner. each process wants to get XX number of files from the
same batch of files... assume i have a batch of 50,000 files. my issue is
how do i allow each of the processes to get their batch of unique files as
fast as possible. (the 50K number is an arbitrary number.. my project will
shrink/expand over time...

if i dump all the 50K files in the same dir, i can have a lock file that
would allow each process to sequentially read/write the lock file, and then
access the dir to get the XX files the process is needing. (each process is
just looking to get the next batch of files for processing. there's no
searching based on text in the name of the files. it's a kind of FIFO queuing
system) this approach could work, but it's basically sequential, and could
in theory get into race conditions regarding the lockfile.

i could also have the process that creates the files, throw the files in
some kind of multiple directory processes, where i split the 50K files into
separate dirs and somehow implement logic to allow the client process to
fetch the files from the unique/separate dirs.. but this could get ugly.

so my issue is essentially how can i allow as close to simultaneous access
by client/child processes to a kind of FIFO of files...

whatever logic i create for this process, will also be used for the next
iteration of the project, where i get rid of the files.. and i use some sort
of database as the informational storage.

hopefully this provides a little more clarity.

thanks


-Original Message-
From: Robert Cummings [mailto:rob...@interjinn.com]
Sent: Sunday, March 01, 2009 2:50 AM
To: bruce
Cc: php-general@lists.php.net
Subject: Re: [PHP] file locking...


On Sat, 2009-02-28 at 21:46 -0800, bruce wrote:
> Hi.
>
> Got a bit of a question/issue that I'm trying to resolve. I'm asking this
of
> a few groups so bear with me.
>
> I'm considering a situation where I have multiple processes running, and
> each process is going to access a number of files in a dir. Each process
> accesses a unique group of files, and then writes the group of files to
> another dir. I can easily handle this by using a form of locking, where I
> have the processes lock/read a file and only access the group of files in
> the dir based on the  open/free status of the lockfile.
>
> However, the issue with the approach is that it's somewhat synchronous.
I'm
> looking for something that might be more asynchronous/parallel, in that
I'd
> like to have multiple processes each access a unique group of files from
the
> given dir as fast as possible.
>
> So.. Any thoughts/pointers/comments would be greatly appreciated. Any
> pointers to academic research, etc.. would be useful.

Threads? Or spawn off child processes. Maybe I'm not understanding your
issues well enough.

Cheers,
Rob.
--
http://www.interjinn.com
Application and Templating Framework for PHP


--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking...

2009-03-01 Thread Robert Cummings
On Sat, 2009-02-28 at 21:46 -0800, bruce wrote:
> Hi.
> 
> Got a bit of a question/issue that I'm trying to resolve. I'm asking this of
> a few groups so bear with me.
> 
> I'm considering a situation where I have multiple processes running, and
> each process is going to access a number of files in a dir. Each process
> accesses a unique group of files, and then writes the group of files to
> another dir. I can easily handle this by using a form of locking, where I
> have the processes lock/read a file and only access the group of files in
> the dir based on the  open/free status of the lockfile.
> 
> However, the issue with the approach is that it's somewhat synchronous. I'm
> looking for something that might be more asynchronous/parallel, in that I'd
> like to have multiple processes each access a unique group of files from the
> given dir as fast as possible.
> 
> So.. Any thoughts/pointers/comments would be greatly appreciated. Any
> pointers to academic research, etc.. would be useful.

Threads? Or spawn off child processes. Maybe I'm not understanding your
issues well enough.

Cheers,
Rob.
-- 
http://www.interjinn.com
Application and Templating Framework for PHP


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking...

2009-03-01 Thread Ashley Sheridan
On Sat, 2009-02-28 at 21:46 -0800, bruce wrote:
> Hi.
> 
> Got a bit of a question/issue that I'm trying to resolve. I'm asking this of
> a few groups so bear with me.
> 
> I'm considering a situation where I have multiple processes running, and
> each process is going to access a number of files in a dir. Each process
> accesses a unique group of files, and then writes the group of files to
> another dir. I can easily handle this by using a form of locking, where I
> have the processes lock/read a file and only access the group of files in
> the dir based on the  open/free status of the lockfile.
> 
> However, the issue with the approach is that it's somewhat synchronous. I'm
> looking for something that might be more asynchronous/parallel, in that I'd
> like to have multiple processes each access a unique group of files from the
> given dir as fast as possible.
> 
> So.. Any thoughts/pointers/comments would be greatly appreciated. Any
> pointers to academic research, etc.. would be useful.
> 
> thanks
> 
> 
> 
> 
You could do it one of several ways:

1. Have the files actually written to a subversion/git repository, and
let that handle differences.
2. Store the files in a database as blobs
3. Do something clever with filename suffixes to indicate versions of
the file


Ash
www.ashleysheridan.co.uk


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] File Locking during *other* file operations

2004-12-23 Thread Robinson, Matthew
No really good reason that I can think of. I don't see any reason as to
why it wouldn't work with just 'x'.

Must have been having a 'beer' moment.

Yes, it should probably clear out stale lock files, but the files I
protect with this are better left untouched if the lock fails. I think
that locks fail for a reason and you should find that reason before you
unlock and potentially break something.

M 

-Original Message-
From: Richard Lynch [mailto:[EMAIL PROTECTED] 
Sent: 20 December 2004 18:26
To: Robinson, Matthew
Cc: Michael Sims; php-general
Subject: RE: [PHP] File Locking during *other* file operations

Robinson, Matthew wrote:
>  I use this code, Not all my own, some from the php manual (probably 
> most of it in fact) I lock the file as filename.lock so that I can 
> muck about with it completely and then unlock the .lock and remove it.
>
> M
>
> function LockFile($file)
> {
>
> $LockFile = $file . ".lock";# Lock the
file
> $lf = fopen ($LockFile, "wx");
>
> while ($lf === FALSE && $i++ < 20)
> {
> clearstatcache();
> usleep(rand(5,85));
> $lf = @fopen ($LockFile, 'x');


How come you use "wx" up there, and just 'x' here?

Is there some reason for that?

One may (or may not) want to consider a mechanism for throwing out
really old lock files, since it's possible your PHP script or
application would eventually fail to remove a lock file...  Or not,
depending on how you code the rest of it.

--
Like Music?
http://l-i-e.com/artists.htm




--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] File Locking during *other* file operations

2004-12-20 Thread Richard Lynch
Robinson, Matthew wrote:
>  I use this code, Not all my own, some from the php manual (probably
> most of it in fact) I lock the file as filename.lock so that I can muck
> about with it completely and then unlock the .lock and remove it.
>
> M
>
> function LockFile($file)
> {
>
> $LockFile = $file . ".lock";# Lock the file
> $lf = fopen ($LockFile, "wx");
>
> while ($lf === FALSE && $i++ < 20)
> {
> clearstatcache();
> usleep(rand(5,85));
> $lf = @fopen ($LockFile, 'x');


How come you use "wx" up there, and just 'x' here?

Is there some reason for that?

One may (or may not) want to consider a mechanism for throwing out really
old lock files, since it's possible your PHP script or application would
eventually fail to remove a lock file...  Or not, depending on how you
code the rest of it.
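
One possible mechanism, as a sketch (the age threshold is an arbitrary assumption; $file is the same argument the quoted LockFile() takes):

<?php
// Before giving up on the lock, discard a .lock file that looks abandoned.
$lockFile = $file . '.lock';
$maxAge   = 300;            // seconds -- purely an example threshold

clearstatcache();
if (file_exists($lockFile) && (time() - filemtime($lockFile)) > $maxAge) {
    @unlink($lockFile);     // assume its owner died without cleaning up
}

Whether you want this at all is debatable; the follow-up in this thread prefers to find out why a lock is stuck rather than delete it.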

-- 
Like Music?
http://l-i-e.com/artists.htm

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] File Locking during *other* file operations

2004-12-20 Thread Robinson, Matthew
 I use this code, Not all my own, some from the php manual (probably
most of it in fact) I lock the file as filename.lock so that I can muck
about with it completely and then unlock the .lock and remove it.

M

function LockFile($file)
{
    $LockFile = $file . ".lock";        # Lock the file
    $i = 0;                             # retry counter
    $lf = fopen($LockFile, "wx");

    while ($lf === FALSE && $i++ < 20)
    {
        clearstatcache();
        usleep(rand(5, 85));
        $lf = @fopen($LockFile, 'x');
    }

    if ($lf !== FALSE)
    {
        return array($lf, $LockFile);
    } else {
        return FALSE;
    }
}

###
#
# UnLockFile()
#

function UnLockFile($LockArray)
{
    list($lf, $LockFile) = $LockArray;
    fclose($lf);
    unlink($LockFile);
}

-Original Message-
From: Michael Sims [mailto:[EMAIL PROTECTED] 
Sent: 17 December 2004 15:28
To: php-general
Subject: RE: [PHP] File Locking during *other* file operations

Gerard Samuel wrote:
> Im talking about file locking during deleting, and moving files.
> Is it possible to perform file locking for these file operations?

Yes, have your scripts attempt to lock a separate lock file before
performing deleting or moving operations.

--
PHP General Mailing List (http://www.php.net/) To unsubscribe, visit:
http://www.php.net/unsub.php




--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] File Locking during *other* file operations

2004-12-17 Thread Gerard Samuel
Michael Sims wrote:
Yes, have your scripts attempt to lock a separate lock file before performing
deleting or moving operations.
 

Jason Barnett wrote:
Sure... just acquire the exclusive lock like you would for writing to 
a file, but do the delete / move instead.  Your apps must actually try 
to flock before *every* file operation though if you expect the 
locking system to work.

Thanks.  I'll try to see what I can come up with...
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


RE: [PHP] File Locking during *other* file operations

2004-12-17 Thread Michael Sims
Gerard Samuel wrote:
> Im talking about file locking during deleting, and moving
> files.
> Is it possible to perform file locking for these file operations?

Yes, have your scripts attempt to lock a separate lock file before performing
deleting or moving operations.
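
For instance (rough sketch; the file names and the .lock suffix are placeholders):

<?php
// Every script that deletes or moves data.txt must take this same lock first.
$lock = fopen('/path/to/data.txt.lock', 'c');
if ($lock !== false && flock($lock, LOCK_EX)) {
    rename('/path/to/data.txt', '/path/to/archive/data.txt');   // or unlink(...)
    flock($lock, LOCK_UN);
}
if ($lock !== false) {
    fclose($lock);
}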

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] File locking in PHP???

2004-07-15 Thread Curt Zirzow
* Thus wrote Scott Fletcher:
> Nah!  I'll settle for a simplier one...   file_exists() by checking to see
> if the file exist then spit out the error message.  Meaning the file is in
> use...

Don't use file_exists() for that, it will fail miserably with
race conditions. A better, more portable way would be to use
mkdir():

if (mkdir('mylockdir', 0755) ) {
  // we've obtained a lock 
  // do stuff 

  // and unlock it
  rmdir('mylockdir');
} else {

  // unable to obtain lock

}


Curt
-- 
First, let me assure you that this is not one of those shady pyramid schemes
you've been hearing about.  No, sir.  Our model is the trapezoid!

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] File locking in PHP???

2004-07-15 Thread Scott Fletcher
Nah!  I'll settle for a simpler one...   file_exists(), by checking to see
if the file exists, then spit out the error message.  Meaning the file is in
use...

FletchSOD

"Matt M." <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> > Hi!  I saw the php function flock(), since I never used it before so I
> > thought I would ask you folks a couple of questions.
>
> did you read all of the user comments on http://us2.php.net/flock
>
> There is a bunch of good info in there

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] File locking in PHP???

2004-07-15 Thread Scott Fletcher
Yea, read that, very good info there.  Alright, I'll make one from scratch
and do some testing to find what needs to be added/changed/removed to make it
more of a rock-solid script.  Boy, it reminds me of Perl.

Thanks,
 FletchSOD

"Matt M." <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> > Hi!  I saw the php function flock(), since I never used it before so I
> > thought I would ask you folks a couple of questions.
>
> did you read all of the user comments on http://us2.php.net/flock
>
> There is a bunch of good info in there

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] File locking in PHP???

2004-07-15 Thread Matt M.
> Hi!  I saw the php function flock(), since I never used it before so I
> thought I would ask you folks a couple of questions.

did you read all of the user comments on http://us2.php.net/flock

There is a bunch of good info in there

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking over NFS?

2004-07-06 Thread Manuel Lemos
Hello,
On 07/06/2004 12:56 PM, Kyle wrote:
Hi Manuel,
Do you mean I can just use flock over NFS ?
It depends. You need to check whether the NFS server that you use 
supports it.

--
Regards,
Manuel Lemos
PHP Classes - Free ready to use OOP components written in PHP
http://www.phpclasses.org/
PHP Reviews - Reviews of PHP books and other products
http://www.phpclasses.org/reviews/
Metastorage - Data object relational mapping layer generator
http://www.meta-language.net/metastorage.html
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


Re: [PHP] file locking over NFS?

2004-07-06 Thread kyle
Hi Manuel,

Do you mean I can just use flock over NFS ?

"Manuel Lemos" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hello,
>
> On 07/06/2004 05:25 AM, Kyle wrote:
> > I found this from php function list manual:
> > "flock() will not work on NFS and many other networked file systems.
Check
> > your operating system documentation for more details."
> >
> > I wish flock can do NFS file locking, but offical manual says no to
>
> This is not accurate. There NFS systems that were meant to support
locking.
>
> -- 
>
> Regards,
> Manuel Lemos
>
> PHP Classes - Free ready to use OOP components written in PHP
> http://www.phpclasses.org/
>
> PHP Reviews - Reviews of PHP books and other products
> http://www.phpclasses.org/reviews/
>
> Metastorage - Data object relational mapping layer generator
> http://www.meta-language.net/metastorage.html

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking over NFS?

2004-07-06 Thread Manuel Lemos
Hello,
On 07/06/2004 05:25 AM, Kyle wrote:
I found this from php function list manual:
"flock() will not work on NFS and many other networked file systems. Check
your operating system documentation for more details."
I wish flock can do NFS file locking, but offical manual says no to
This is not accurate. There are NFS systems that were meant to support locking.
--
Regards,
Manuel Lemos
PHP Classes - Free ready to use OOP components written in PHP
http://www.phpclasses.org/
PHP Reviews - Reviews of PHP books and other products
http://www.phpclasses.org/reviews/
Metastorage - Data object relational mapping layer generator
http://www.meta-language.net/metastorage.html
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


Re: [PHP] file locking over NFS?

2004-07-06 Thread kyle
I found this in the PHP function reference:
"flock() will not work on NFS and many other networked file systems. Check
your operating system documentation for more details."

I wish flock could do NFS file locking, but the official manual says no to it.

"Brent Clark" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi there
>
> If i am not mistaken, that is a standard part of the nfs suite.
> All you need to make sure is that your export is correct, and the you are
> not using the
> -nolock option.
> Other than that if its PHP you more interested in, look at the flock()
> function.
>
> Kind Regards
> Brent Clark
>
> -Original Message-
> From: kyle [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 05, 2004 6:54 PM
> To: [EMAIL PROTECTED]
> Subject: [PHP] file locking over NFS?
>
>
> Hi all,
>
> Is there any simple, safe, and efficiency way to do file locking over NFS?
>
> Thanks.
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking over NFS?

2004-07-06 Thread kyle
Thanks a lot, your information is really useful for me.

"Marek Kilimajer" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> kyle wrote:
> > Hi all,
> >
> > Is there any simple, safe, and efficiency way to do file locking over
NFS?
> >
> > Thanks.
> >
>
> You can mysql locks (maybe it works in other databases too):
>
> GET_LOCK(str,timeout);
> RELEASE_LOCK(str);

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] file locking over NFS?

2004-07-05 Thread Brent Clark
Hi there

If I am not mistaken, that is a standard part of the NFS suite.
All you need to make sure is that your export is correct, and that you are
not using the
-nolock option.
Other than that, if it's PHP you're more interested in, look at the flock()
function.

Kind Regards
Brent Clark

-Original Message-
From: kyle [mailto:[EMAIL PROTECTED]
Sent: Monday, July 05, 2004 6:54 PM
To: [EMAIL PROTECTED]
Subject: [PHP] file locking over NFS?


Hi all,

Is there any simple, safe, and efficient way to do file locking over NFS?

Thanks.

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking over NFS?

2004-07-05 Thread Marek Kilimajer
kyle wrote:
Hi all,
Is there any simple, safe, and efficient way to do file locking over NFS?
Thanks.
You can use MySQL locks (maybe it works in other databases too):
GET_LOCK(str,timeout);
RELEASE_LOCK(str);
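
For example, something like this from PHP (sketch only; the connection details and the lock name are placeholders):

<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// GET_LOCK() returns 1 if the named lock was acquired within the timeout (seconds).
$res = $db->query("SELECT GET_LOCK('shared-file', 10) AS got");
$row = $res ? $res->fetch_assoc() : null;

if ($row && $row['got'] == 1) {
    // ... every client that touches the shared file uses the same lock name ...
    $db->query("SELECT RELEASE_LOCK('shared-file')");
} else {
    // Timed out waiting for the lock.
}

One nice property is that MySQL drops the lock automatically if the client connection dies, so stale locks clean themselves up.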
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


Re: [PHP] file locking question

2004-06-21 Thread Michal Migurski
> I have this script whereby I use the php function file().
> Just a quick question, do I need to implement file locking.

If you want to ensure that the file is not changed by other scripts while
you're reading from it, yes. Note that this is advisory only - writer
scripts will only respect the lock if they request a write lock. See
manual for details.

> if so, can I just use something like
>   flock( file("/path/file.ext") );

flock() takes a file handle as an argument, not an array of strings:

$fh = fopen("/path/file.ext", 'r');
flock($fh, LOCK_SH);
// read from file here
flock($fh, LOCK_UN);
fclose($fh);

See example 1 in the manual page for flock for more info.

-
michal migurski- contact info and pgp key:
sf/cahttp://mike.teczno.com/contact.html

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking question

2004-06-21 Thread Daniel Clark
If you're just going to read only, no.

>>Hi all
>>
>>I have this script whereby I use the php function file().
>>Just a quick question, do I need to implement file locking.
>>
>>if so, can I just use something like
>>  flock( file("/path/file.ext") );
>>
>>Kind Regards
>>Brent Clark

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] file locking question

2004-06-21 Thread Marek Kilimajer
Brent Clark wrote:
Hi all
I have this script whereby I use the php function file().
Just a quick question, do I need to implement file locking.
if so, can I just use something like
flock( file("/path/file.ext") );
No, you need file locking only for writing.
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


Re: [PHP] File locking problem

2002-11-08 Thread Charles Wiltgen
Ernest E Vogelsinger wrote...

>> This has to be a PHP bug, which I'd be happy to file if someone more
>> experienced could confirm that it isn't stupid user error.
> 
> I don't believe it has something to do with PHP, much more with the FTP server
> you're accessing... This might delay the actual flushing for what reason ever.
> I don't believe PHP buffers files differently if they are accessed using an
> fopen url wrapper.

The Linux system I'm FTPing to is using ProFTPD, which as I understand it is
one of the best open-source servers out there.

I will add a note to the documentation along the lines of "If you fopen() to
an FTP URI, write some stuff, fflush() and fclose(), that doesn't mean the
file is complete (or that it even exists).  Check for both of these things
before you do anything with the file."

Thanks!

-- Charles Wiltgen


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-08 Thread Ernest E Vogelsinger
At 16:50 08.11.2002, Charles Wiltgen spoke out and said:
[snip]
>I tried require() too, but it made no difference.  I later learned that if
>you fopen(), write some stuff, fflush() and ffclose(), that doesn't mean the
>file is complete (or that it even exists).
>
>This has to be a PHP bug, which I'd be happy to file if someone more
>experienced could confirm that it isn't stupid user error.
[snip] 

I don't believe it has anything to do with PHP, much more with the FTP
server you're accessing... This might delay the actual flushing for whatever
reason. I don't believe PHP buffers files differently if they are
accessed using an fopen URL wrapper.


-- 
   >O Ernest E. Vogelsinger 
   (\) ICQ #13394035 
^ http://www.vogelsinger.at/



Re: [PHP] File locking problem

2002-11-08 Thread Charles Wiltgen
Krzysztof Dziekiewicz wrote...

> Do you use "include" or "require". In such situation you should not use
> "include".

I tried require() too, but it made no difference.  I later learned that if
you fopen(), write some stuff, fflush() and fclose(), that doesn't mean the
file is complete (or that it even exists).

This has to be a PHP bug, which I'd be happy to file if someone more
experienced could confirm that it isn't stupid user error.

For a workaround, after I fflush() and fclose() I while() until the file
exists, and then while() until it's larger than 10 bytes.  I should while()
until it's the size of the data I've written, but this method has been
reliable so far.
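
In rough outline the workaround looks like this (my sketch of what Charles describes; $localPath and the sleep interval are assumptions, and a real version should also give up after a timeout):

<?php
// $stuff was written to the ftp:// handle; $localPath is where that upload
// lands on this server, so we can check it before include()ing it.
$expected = strlen($stuff);

clearstatcache();
while (!file_exists($localPath)) {
    usleep(100000);          // 0.1 s
    clearstatcache();
}
while (filesize($localPath) < $expected) {
    usleep(100000);
    clearstatcache();
}
include $localPath;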

-- 
Charles Wiltgen

   "Well, once again my friend, we find that science is a two-headed beast.
One head is nice, it gives us aspirin and other modern conveniences...
but the other head of science is bad!  Oh beware the other head of
science...it bites!" -- The Tick





-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-08 Thread Krzysztof Dziekiewicz
> I'm having file locking problems.

> I'm using fopen() to write a file via FTP.  At the end, I'm doing...

> fflush($fp);
> fclose($fp);

> ...and then I include it immediately after.  But many times I only get part
> of what I wrote to the file, which suggests that it wasn't really flushed
> and closed properly.

Do you use "include" or "require". In such situation you should not use
"include".




-- 
Krzysztof Dziekiewicz


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-07 Thread Charles Wiltgen
Marco Tabini wrote...

> Ok, here's another possibly stupid solution.

Not at all.  My solution was not far from that -- I have to wait for the
file to exist, and then to have something in it, and then include it.  (See
my "PHP fopen() bug + solution" post.)

Thank you,

-- Charles Wiltgen


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-07 Thread Marco Tabini
Ok, here's another possibly stupid solution. Have you tried (a) setting
a pause (like 2 secs) between when you end writing and include the file
or (b) writing the file, then refreshing the page with a parameter and
including it only then? In the latter case, terminating the script and
refreshing it might cause the o/s to properly flush the file buffers.

Just a couple of thoughts. Hope this helps.


Marco
-
php|architect -- The Monthly Magazine For PHP Professionals
Come visit us on the web at http://www.phparch.com!

On Thu, 2002-11-07 at 17:58, Charles Wiltgen wrote:
> Marco Tabini wrote...
> 
> > 1) What OS are you using?
> 
> Linux.
> 
> > 2) Does the file include PHP code?
> 
> Yes.
> 
> > If it contains PHP code, are you sure that there aren't any errors in the PHP
> > code?
> 
> Yes.  The resulting XHTML validates when the include works (more than half
> the time).  The rest of the time, the file is being created correctly but
> not included correctly.
> 
> I've tried to include() and require().  No error is ever generated, I
> suspect because the file exists, and include/require are being called while
> the file is still open (even though I've flushed and closed it).  Shoot.
> 
> -- Charles Wiltgen
> 
> 
> -- 
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
> 



-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-07 Thread Charles Wiltgen
Marco Tabini wrote...

> 1) What OS are you using?

Linux.

> 2) Does the file include PHP code?

Yes.

> If it contains PHP code, are you sure that there aren't any errors in the PHP
> code?

Yes.  The resulting XHTML validates when the include works (more than half
the time).  The rest of the time, the file is being created correctly but
not included correctly.

I've tried to include() and require().  No error is ever generated, I
suspect because the file exists, and include/require are being called while
the file is still open (even though I've flushed and closed it).  Shoot.

-- Charles Wiltgen


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-07 Thread Marco Tabini
Charles,

2 questions--

1) What OS are you using?
2) Does the file include PHP code? Otherwise, can you keep its contents
in a string and simply output that string? If it contains PHP code, are
you sure that there aren't any errors in the PHP code?

Ok, so it was a bit more than 2 questions :-)


Marco
-- 

php|architect - The magazine for PHP Professionals
The first monthly worldwide  magazine dedicated to PHP programmer

Come visit us at http://www.phparch.com!


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


Re: [PHP] File locking problem

2002-11-07 Thread Charles Wiltgen
Charles Wiltgen wrote...

> I'm having file locking problems.
> 
> I'm using fopen() to write a file via FTP.  At the end, I'm doing...
> 
>   fflush($fp);
>   fclose($fp);
> 
> ...and then I include it immediately after.  But many times I only get part
> of what I wrote to the file, which suggests that it wasn't really flushed
> and closed properly.

BTW, when I only get part of the file in the browser, I open it with Pico
and it's all there.  So I know that $stuff = ob_end_clean() is working, and
that $stuff is being written to the fopen(ftp://...) pointer correctly.

I suspect that PHP is including before the file is closed, but not returning
an error.  I haven't found any other mechanisms in PHP that would allow me
to verify that the file is really, really, really closed after using
fflush() and fclose() on an fopen("ftp://...";).  Any ideas?

-- Charles Wiltgen


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-06 Thread Charles Wiltgen
Marco Tabini wrote...

> Just a (possibly stupid) suggestion--is it possible that the file is being
> overwritten by another instance of your script that's run in the meantime?

This may also be a problem at some point, but currently I'm just trying to
get it working in a test environment where only one instance is running.

I know I can just use another empty file as a semaphore, but my performance
is already killed by having to use FTP to create and delete files with PHP,
and I need to do whatever I can to avoid it.

Basically, it appears that using fflush() and fclose() is no guarantee that
a file is completely written, and I'm wondering what I can do besides insert
sleep() statements.

-- 
Charles Wiltgen

  "Love is a snowmobile racing across the tundra and
   then suddenly it flips over, pinning you underneath.
   At night, the ice weasels come." - Nietzsche (Groening)
 



-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File locking problem

2002-11-06 Thread Marco Tabini
Just a (possibly stupid) suggestion--is it possible that the file is
being overwritten by another instance of your script that's run in the
meantime?

-
php|architect -- The Magazine for PHP Professionals
Check us out on the web at http://www.phparch.com

On Wed, 2002-11-06 at 23:06, Charles Wiltgen wrote:
> Hello,
> 
> I'm having file locking problems.
> 
> I'm using fopen() to write a file via FTP.  At the end, I'm doing...
> 
> fflush($fp);
> fclose($fp);
> 
> ...and then I include it immediately after.  But many times I only get part
> of what I wrote to the file, which suggests that it wasn't really flushed
> and closed properly.
> 
> What else can I do?
> 
> Thanks,
> 
> -- 
> Charles Wiltgen
> 
>"Well, once again my friend, we find that science is a two-headed beast.
> One head is nice, it gives us aspirin and other modern conveniences...
> but the other head of science is bad!  Oh beware the other head of
> science...it bites!" -- The Tick
> 
> 
> 
> 
> 
> -- 
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
> 



-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php




Re: [PHP] File Locking in PHP.

2001-02-19 Thread David Robley

On Tue, 20 Feb 2001 08:50, Matthew Toledo wrote:
> Hello, I am just starting to use PHP.  I have been using PERL for quite
> some time.  I was wondering if there is any specific FILE LOCKING I
> need to implement to keep more than one person from altering a text
> file at the same time.
>
> For instance, in PERL, you use the flock command to set up an advisory
> file lock.  Lets say that if I have a web page with a text based
> database, and more than one person wants to write to the file at the
> same time.  In perl, if you use flock(FILE, 2).  This causes person A
> to wait while person B writes to the file.  flock(FILE, 8) unlocks the
> file.  Then person A is allowed to write to the file.
>
> Does the perl equivalent of flock happen automatically in PHP?
>
> If not, how do you lock a file?
>
> Is it an advisory lock, or will it cause an error if two people try to
> access the same file at the same time for the same purpose (writing).
>
> Thanks,

The flock() function in PHP seems to do pretty much what you are looking 
for - it allows setting a shared or exclusive lock and you can control 
blocking while locking (at least according to the manual!)
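
It isn't automatic, though; you call flock() yourself, much as in Perl. A rough PHP equivalent of the flock(FILE, 2) ... flock(FILE, 8) sequence (the path is a placeholder):

<?php
$fh = fopen('/path/to/textdb.txt', 'a');
if ($fh !== false && flock($fh, LOCK_EX)) {   // like Perl's flock(FILE, 2): wait for an exclusive lock
    fwrite($fh, "new record\n");
    fflush($fh);
    flock($fh, LOCK_UN);                      // like Perl's flock(FILE, 8): release it
}
if ($fh !== false) {
    fclose($fh);
}

Like Perl's, this is an advisory lock: writers that both call flock() are serialized, but nothing stops a script that skips flock() from writing anyway.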

-- 
David Robley| WEBMASTER & Mail List Admin
RESEARCH CENTRE FOR INJURY STUDIES  | http://www.nisu.flinders.edu.au/
AusEinet| http://auseinet.flinders.edu.au/
Flinders University, ADELAIDE, SOUTH AUSTRALIA

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
To contact the list administrators, e-mail: [EMAIL PROTECTED]