You die with the updates/inserts in the DB because they lock the selects. To speed things up, consider putting indexes on all the columns you use after the WHERE clause - I guess that's a start. Considering some other mechanism might be an idea as well: depending on the complexity of your data, MySQL might not be the first choice. A simple key/value database might do the trick at higher performance, or try PostgreSQL and compare it against MySQL for this use case. Just looking in one direction might not always do the trick when things start to grow :-)

cheers
lenz
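A concrete sketch of the indexing suggestion above. The table and column names (`jobs`, `status`) are hypothetical, not from the thread - adapt them to whatever columns your WHERE clause actually filters on:

```sql
-- Hypothetical work-queue table; names are illustrative only.
-- Without an index, every polling SELECT scans the whole table
-- and holds its locks for longer than necessary.
CREATE INDEX idx_jobs_status ON jobs (status);

-- Check that MySQL actually uses the new index for the polling query:
EXPLAIN SELECT id FROM jobs WHERE status = 'pending' LIMIT 1;
-- The EXPLAIN output should list idx_jobs_status in the key column.
```

Whether this helps depends on the selectivity of the column; an index on a column with only two or three distinct values buys much less than one on a highly selective column.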
On Mon, Jan 26, 2009 at 2:24 PM, Michael <[email protected]> wrote:

> > So anyway.. if you're still convinced you want 20 processes to do the
> > same job as the one, Dmitry's suggestion of SELECT ... FOR UPDATE will
> > 'transactionalise' the database accesses and solve your
> > synchronisation problems.
>
> And this post is somewhat entertaining reading (notwithstanding that I will
> take on board your recommendation above) because it assumes that
> the 'processing' that the script is doing is for the database, when in
> reality it's not.
>
> All the DB does is keep track of the overall 'list' of work to be done.
>
> And yes, as many processes as I can throw at it certainly speeds things up.
>
> The limit at present is caused by the previously mentioned DB issues.
>
> Michael

--
iWantMyName.com
painless domain registration (finally)

--~--~---------~--~----~------------~-------~--~----~
NZ PHP Users Group: http://groups.google.com/group/nzphpug
To post, send email to [email protected]
To unsubscribe, send email to [email protected]
-~----------~----~----~----~------~----~------~--~---
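The SELECT ... FOR UPDATE suggestion quoted above, sketched as a job-claiming transaction. The `jobs` table and `status` column are assumed for illustration, and this requires an InnoDB table, since MyISAM has no row-level locks:

```sql
-- Each worker process runs this to claim one unit of work atomically.
START TRANSACTION;

SELECT id FROM jobs
WHERE status = 'pending'
ORDER BY id
LIMIT 1
FOR UPDATE;   -- row-locks the selected job so no other worker can claim it

-- Mark the claimed row, binding the id returned by the SELECT above:
UPDATE jobs SET status = 'processing' WHERE id = ?;

COMMIT;       -- releases the lock; do the actual processing outside the DB
```

Keeping the transaction this short matters: the real work happens after COMMIT, so the lock is held only for the claim itself rather than for the whole job, which is what lets many workers run without serialising on each other.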
