but it feels quite heavy-handed. I like Tim's solution (not
surprising since it's more-or-less the same as the one which I posted
a few days ago -- in theory, anyway ;) It's elegant in its simplicity
and the way it uses the natural features of the database to get the
desired result.
I was thinking about them purely in the context of real-time
processing, but you're quite right, thanks for the correction.
On Jan 27, 10:15 am, lenz norb...@googlemail.com wrote:
you funnel your events into a queue and fetch one off the queue whenever a
process finishes with the previous one.
I have tried using InnoDB instead of MyISAM, though this has created new
problems: the database is really slow and the row count changes every time.
In short, I would rather stick with the devil I know for now.
I have managed to improve performance a bit by adding another field to store a
To: nzphpug@googlegroups.com
Subject: [phpug] Re: DBA's please - MySQL 5 PROGRESS UPDATE
Michael wrote:
I have tried using InnoDB instead of MyISAM, though this has created new
problems: the database is really slow and the row count changes every time.
In short, I would rather stick with the devil I know for now.
I have managed to improve performance a bit by adding another
you funnel your events into a queue and fetch one off the queue whenever a
process finishes with the previous one. this is exactly what you do with
your query: you have data, you pick one, you process it, you update and pick
the next record.
event queues are built for batch processing, actually
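The claim-one-row pattern lenz describes can be sketched like this. It's a self-contained demo using SQLite so it runs anywhere; the `jobs` table, `done` flag and `token` column are illustrative assumptions, not from the thread. In the real setup the workers would hit MySQL, but the atomic-claim logic is the same: each worker marks a row as taken in a single UPDATE, so no two workers ever process the same record.

```python
import sqlite3
import uuid

# Illustrative job table; in the real system this lives in MySQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT,"
    " done INTEGER DEFAULT 0, token TEXT)"
)
conn.executemany(
    "INSERT INTO jobs (payload) VALUES (?)",
    [(f"job-{i}",) for i in range(10)],
)
conn.commit()

def claim_one(conn):
    """Atomically mark one unprocessed row as claimed; return its id or None."""
    token = uuid.uuid4().hex
    # The single UPDATE is the claim: it flips done on exactly one
    # still-unprocessed row, tagged with our unique token.
    conn.execute(
        "UPDATE jobs SET done = 1, token = ?"
        " WHERE id = (SELECT id FROM jobs WHERE done = 0 LIMIT 1) AND done = 0",
        (token,),
    )
    conn.commit()
    row = conn.execute(
        "SELECT id FROM jobs WHERE token = ?", (token,)
    ).fetchone()
    return row[0] if row else None

claimed = []
while (job_id := claim_one(conn)) is not None:
    claimed.append(job_id)

print(len(claimed), len(set(claimed)))  # prints "10 10": each job claimed once
```

The point of the unique token is that the worker can find out *which* row its UPDATE grabbed without a race between the UPDATE and a follow-up SELECT.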
Michael wrote:
I have 16 - 25 simultaneous processes running, selecting records (to process)
from a DB that has about 3.3 million rows, quite standard:
SELECT a,b WHERE x, y and z ORDER BY rand() LIMIT 1
Works fine up to about 8-10 processes, and then goes exponentially slower
until
I always enjoy reading and considering your questions, and often I am
entertained or even staggered by the 'solutions' you cook up.
However, what problem are you trying to solve by spawning 16 - 25
versions of the same process? My suspicion is that you are trying to
efficiently maximise resource
So anyway... if you're still convinced you want 20 processes to do the
same job as one, Dmitry's suggestion of SELECT ... FOR UPDATE will
'transactionalise' the database accesses and solve your
synchronisation problems.
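For anyone following along, that pattern looks roughly like this. Table and column names (`jobs`, `done`) are placeholders, and note that row-level locking with FOR UPDATE requires a transactional engine such as InnoDB -- it has no effect on MyISAM:

```sql
START TRANSACTION;

-- Lock one unprocessed row; concurrent transactions running the same
-- SELECT ... FOR UPDATE will block on it until we commit
SELECT id, a, b
  FROM jobs
 WHERE done = 'no'
 LIMIT 1
 FOR UPDATE;

-- ... process the row, then mark it off (using the id from the SELECT) ...
UPDATE jobs SET done = 'yes' WHERE id = <id from the SELECT>;

COMMIT;
```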
And this post is somewhat entertaining reading (notwithstanding that
On Mon, 26 Jan 2009 14:51:29 lenz wrote:
you die with the updates/inserts in the DB as they lock the selects. to
speed things up here consider the usage of indexes on all columns that you
use after the WHERE ... guess that's a start. considering some other
mechanism might be an idea as
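lenz's index advice, sketched against the placeholder columns from Michael's query (the table name `jobs` is my assumption):

```sql
-- Composite index covering the columns used after WHERE
ALTER TABLE jobs ADD INDEX idx_xyz (x, y, z);

-- Check that the query actually uses it (look at the key column)
EXPLAIN SELECT a, b FROM jobs WHERE x = 1 AND y = 2 AND z = 3;
```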
I had a little think about how I might solve this problem if I didn't
have ways to control database synchronisation, and came up with this:
(described in MongrelCode (TM) ;)
-
// Get all available records
get recordset result for SELECT a,b WHERE x, y and z and DONE=no into $rs
//
ever thought of using an event queue? looks much like a problem solved in
the past already. using mysql for it might not be the best approach though.
look at amazon simple queue or something comparable, it would be a way
better fit i guess. push requests to a queue and only store the results in
the
Michael:
I have 16 - 25 simultaneous processes running, selecting records (to process)
from a DB that has about 3.3 million rows, quite standard:
SELECT a,b WHERE x, y and z ORDER BY rand() LIMIT 1
Works fine up to about 8-10 processes, and then goes exponentially slower
until queries
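Part of why this degrades is ORDER BY RAND() itself: the server has to generate a random value for every row matching the WHERE clause and sort them all, just to return one. A common cheaper alternative is to pick a random id first and grab the first row at or above it. A minimal sketch, demonstrated with SQLite (table and column names are illustrative, not Michael's schema):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT)")
conn.executemany(
    "INSERT INTO t (a) VALUES (?)", [(f"row-{i}",) for i in range(1000)]
)
conn.commit()

def pick_random(conn):
    """Pick a roughly-uniform random row without sorting the whole table."""
    lo, hi = conn.execute("SELECT MIN(id), MAX(id) FROM t").fetchone()
    target = random.randint(lo, hi)
    # id >= target copes with gaps left by deletes, at the cost of a
    # slight bias toward rows that follow large gaps
    return conn.execute(
        "SELECT id, a FROM t WHERE id >= ? ORDER BY id LIMIT 1", (target,)
    ).fetchone()

row = pick_random(conn)
print(row is not None, 1 <= row[0] <= 1000)  # prints "True True"
```

With an index on `id` (the primary key here) the SELECT touches only a handful of rows, versus the full scan-and-sort that ORDER BY RAND() forces.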