2009/3/1 Robert Cummings <rob...@interjinn.com>

> On Sun, 2009-03-01 at 10:05 -0800, bruce wrote:
> > hi rob...
> >
> > what you have written is similar to my initial approach... my question,
> and
> > the reason for posting this to a few different groups.. is to see if
> someone
> > has pointers/thoughts for something much quicker...
> >
> > this is going to handle processing requests from client apps to a
> > webservice.. the backend of the service has to quickly process the files
> in
> > the dir as fast as possible to return the data to the web client query...
> Then use a database to process who gets what. DB queries will queue up
> while a lock is in place so batches will occur on first come first
> served basis. I had thought this was for a background script. This will
> save your script from having to browse the filesystem files, sort by
> age, etc. Instead put an index on the ID of the file and grab the X
> lowest IDs.

A database would be the best way to do this, but I've needed to handle this
situation with files in the past, and this is the solution I came up with...

1) Get the next filename to process
2) Try to move it to /tmp/whatever.<pid>
3) Check whether /tmp/whatever.<pid> exists; if it does, process it, then
delete it or move it to an archive directory
4) Repeat until there are no files left to process
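The steps above can be sketched roughly like this (a Python sketch rather than
PHP; the directory names and the handle() function are placeholders, not from
the original post -- the key point is that rename() is atomic, so only one
worker can win the race for any given file):

```python
import glob
import os
import tempfile

# Placeholder queue directory; in practice this is wherever files arrive.
QUEUE_DIR = tempfile.mkdtemp()
processed = []  # record of handled payloads, just for the demo


def handle(path):
    # Placeholder for the real processing step.
    with open(path) as f:
        processed.append(f.read())


def process_queue():
    """Claim files one at a time via atomic rename; stop when none remain."""
    pid = os.getpid()
    claimed = os.path.join(tempfile.gettempdir(), "whatever.%d" % pid)
    while True:
        # Step 1: get the next filenames to process.
        candidates = sorted(glob.glob(os.path.join(QUEUE_DIR, "*")))
        if not candidates:
            break  # step 4: nothing left to process
        for path in candidates:
            try:
                # Step 2: the atomic move is the claim.
                os.rename(path, claimed)
            except OSError:
                continue  # another worker grabbed this file first
            # Step 3: we own it now -- process, then remove (or archive).
            if os.path.exists(claimed):
                handle(claimed)
                os.remove(claimed)


# Demo: seed three files and drain the queue.
for i in range(3):
    with open(os.path.join(QUEUE_DIR, "job%d" % i), "w") as f:
        f.write("payload%d" % i)
process_queue()
```

Because the rename either succeeds completely or fails completely, two workers
polling the same directory can never process the same file twice.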

I have this running on a server that processes several million files a day
without any issues.

For database-based queues I use a similar system, but the move is replaced by
an update that sets the pid field of a single row. I then do a select where
that pid is my pid and process whatever comes back. I have several queues that
use this system; combined, they handle tens of millions of queue items per day
without any problems, with the advantage that I can scale across servers as
well as processes.
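A rough sketch of that claim-by-pid pattern, using SQLite for illustration
(the table and column names are my own invention, not from the original post):

```python
import os
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE queue (id INTEGER PRIMARY KEY, payload TEXT, pid INTEGER)"
)
db.executemany(
    "INSERT INTO queue (payload) VALUES (?)", [("a",), ("b",), ("c",)]
)


def claim_batch(limit):
    """Claim up to `limit` unclaimed rows for this process, then fetch them."""
    pid = os.getpid()
    # The UPDATE is the claim: only rows with no pid are taken, and the
    # database serializes concurrent updates, so each row goes to exactly
    # one worker -- the same first-come-first-served queuing Rob describes.
    db.execute(
        "UPDATE queue SET pid = ? WHERE id IN "
        "(SELECT id FROM queue WHERE pid IS NULL ORDER BY id LIMIT ?)",
        (pid, limit),
    )
    return db.execute(
        "SELECT id, payload FROM queue WHERE pid = ?", (pid,)
    ).fetchall()
```

In a real queue you would also delete the rows (or mark them done) after
processing, so the select-by-pid doesn't return them on the next pass.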


