Thanks for the reply... Here's my solution up to now; it might change in the future.
The app has a bunch of clients that each need to get a separate/unique list of files
from a master server app. The files are created by the master server
process and reside on the filesystem behind the server process. (This is a
client/server based app: the client sends a request to the server, the backend
operation of the server fetches the required files, and the server returns them to the
client.)
A key issue is that I don't want to run into potential race conditions,
which could result in a given client never being served the files it's
trying to fetch.
1) Invoke a form of file locking, with each client process waiting
   until it gets its lock.
2) Invoke some form of round-robin process, where the master process
   puts files in different dirs, so each client has a better
   chance of getting a "lock" on its own dir.
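For what it's worth, option 1 could be sketched with something like the
following (assuming a POSIX system where fcntl.flock() is available; the
lock-file path is made up for the example):

```python
# Minimal sketch of option 1: each client blocks on an exclusive
# advisory lock before fetching. LOCK_PATH is a hypothetical name.
import fcntl

LOCK_PATH = "/tmp/filefetch.lock"  # hypothetical lock file

def fetch_with_lock(fetch):
    """Block until we hold the exclusive lock, then run fetch()."""
    with open(LOCK_PATH, "w") as lockfile:
        fcntl.flock(lockfile, fcntl.LOCK_EX)  # waits until the lock is free
        try:
            return fetch()
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)

result = fetch_with_lock(lambda: "got files")
```

The obvious downside (and why I didn't go this way) is that every client
serializes on one lock, so clients can end up waiting a long time.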
Final Soln: (for now)
I decided to cheat!
I realized that since each client process is essentially unique, I can
create a uniqueId (uuid) for each process. Remember, the client app is
hitting the master server/file process via a webservice. So I have each
client send its uuid to the master server via the webservice. This
information is appended to a file, which gives me a kind of FIFO approach
for creating unique dirs for each client. The server (on the backend) then
reads the FIFO file for the uuids. A master cron process reads the FIFO
file and, for each uuid in it, creates a tmp dir for that uuid. The master
cron process then populates this dir with the required files for the given
client.
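To make that flow concrete, here's a rough single-process sketch of the
three roles (client enqueue, server append, cron populate). The file and
dir names (QUEUE_FILE, BASE_DIR) are my own placeholders, not from the
real app, and the "webservice" hop is collapsed into a function call:

```python
# Sketch of the uuid/FIFO-file flow: client sends a uuid, the server
# appends it to a queue file, and a cron pass creates/populates a
# per-uuid tmp dir. Names here are assumptions for illustration.
import os
import shutil
import tempfile
import uuid

BASE_DIR = tempfile.mkdtemp()                       # stands in for the server's work area
QUEUE_FILE = os.path.join(BASE_DIR, "uuid_queue.txt")

def client_request():
    """Client side: generate a uuid and 'send' it to the server."""
    uid = str(uuid.uuid4())
    server_enqueue(uid)                             # in reality this goes over the webservice
    return uid

def server_enqueue(uid):
    """Server side: append the uuid to the queue file (FIFO order)."""
    with open(QUEUE_FILE, "a") as f:
        f.write(uid + "\n")

def cron_pass(files_for_client):
    """Cron side: for each queued uuid, create a tmp dir and populate it."""
    with open(QUEUE_FILE) as f:
        uids = [line.strip() for line in f if line.strip()]
    for uid in uids:
        client_dir = os.path.join(BASE_DIR, uid)
        os.makedirs(client_dir, exist_ok=True)
        for name, data in files_for_client.items():
            with open(os.path.join(client_dir, name), "w") as out:
                out.write(data)
    open(QUEUE_FILE, "w").close()                   # queue consumed
    return uids

uid = client_request()
served = cron_pass({"batch1.txt": "payload"})
files = os.listdir(os.path.join(BASE_DIR, uid))     # the client's populated dir
shutil.rmtree(BASE_DIR)                             # demo cleanup
```

The nice property is that nothing contends on a lock: each uuid gets its
own dir, so the only shared resource is the append-only queue file.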
On the client side, the client sits in a wait loop, checking to see if
anything has been created/placed in its tmp 'uuid' dir. If files are there,
it fetches them and proceeds.
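The client-side wait loop could look something like this (a sketch only;
the poll interval and timeout are made-up knobs I've added so the loop is
guaranteed to terminate):

```python
# Hedged sketch of the client's polling loop: sleep until files show
# up in its uuid dir, then return their paths. Timeout is my addition.
import os
import shutil
import tempfile
import time

def wait_for_files(uuid_dir, poll_seconds=1.0, timeout=60.0):
    """Poll uuid_dir until files appear; return their paths (or [] on timeout)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            names = os.listdir(uuid_dir)
        except FileNotFoundError:
            names = []                     # dir not created yet by the cron
        if names:
            return [os.path.join(uuid_dir, n) for n in names]
        time.sleep(poll_seconds)
    return []                              # gave up waiting

# demo: a dir that already has one file in it
d = tempfile.mkdtemp()
open(os.path.join(d, "x.txt"), "w").close()
got = wait_for_files(d, poll_seconds=0.1, timeout=2.0)
shutil.rmtree(d)                           # demo cleanup
```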
This approach ensures that a client will never run into a situation where
files are available for processing but it never gets them. In the event
there are no files, the client simply sleeps until there are. In the event
a client requests files by sending its uuid, and then dies before getting
the files even though the master cron had already placed them in the uuid
dir, there will be a cleanup process to reabsorb those files back into the
system...
thanks to all who gave input/pointers!!
From: Dennis Lee Bieber
Sent: Sunday, March 01, 2009 11:41 AM
Subject: Re: file locking...
On Sun, 1 Mar 2009 10:00:54 -0800, "bruce" <bedoug...@earthlink.net>
declaimed the following in comp.lang.python:
> Except in my situation.. the client has no knowledge of the file-naming
> situation, and I might have 1000s of files... think of the FIFO: first in,
> first out. So I'm looking for a fast solution that would allow me to fetch
> groups of, say, 500 files, that get batched and processed by the client.
My silly thoughts...
Main process creates temp/scratch directories for each subprocess;
spawn each subprocess, passing the directory path to it;
main process then just loops over the files moving them, one at a time,
to one of the temp/scratch directories, probably in cyclic order to
distribute the load;
when main/input directory is empty, sleep then check again (or, if the
OS supports it -- use some directory change notification) for new files.
Each subprocess only sees its files in the applicable temp/scratch directory.
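(Dennis's round-robin idea could be sketched roughly like this — dir names
and the worker count are placeholders of mine, and the real version would
loop/sleep instead of running one pass:)

```python
# Sketch of the round-robin distributor: the main process cycles
# through per-worker scratch dirs, moving each inbox file to the next
# dir in the cycle. All paths here are temp placeholders.
import itertools
import os
import shutil
import tempfile

inbox = tempfile.mkdtemp()                          # stands in for the main input dir
workers = [tempfile.mkdtemp() for _ in range(3)]    # per-subprocess scratch dirs
cycle = itertools.cycle(workers)

def distribute_once():
    """Move every file currently in the inbox to the worker dirs, cyclically."""
    for name in sorted(os.listdir(inbox)):
        shutil.move(os.path.join(inbox, name),
                    os.path.join(next(cycle), name))

# demo: drop six files in, then distribute them
for i in range(6):
    open(os.path.join(inbox, "f%d.txt" % i), "w").close()
distribute_once()
counts = [len(os.listdir(d)) for d in workers]      # should be evenly spread
leftover = os.listdir(inbox)                        # inbox should be drained
for d in workers + [inbox]:                         # demo cleanup
    shutil.rmtree(d)
```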
Wulfraed Dennis Lee Bieber KD6MOG
(Bestiaria Support Staff: web-a...@bestiaria.com)
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php