I am prototyping a queueing service for my company that uses a couple 
thousand tubes.  I have a test case where I run 3000 tubes with 1200 jobs 
each.  Those tubes are serviced by a distributor that takes work and 
funnels it into a couple of much larger work queues where the real work 
happens.  I have 3 clients with 10 threads each processing these larger 
tubes and sending results into yet another tube.
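For anyone trying to reproduce this, the pipeline above could be sketched with Python's `queue` module standing in for beanstalkd tubes (counts scaled down, and all names here are hypothetical, not my actual code):

```python
import queue
import threading

# Scaled-down stand-ins for the real topology (3000 tubes x 1200 jobs).
N_FEEDER_TUBES = 30
JOBS_PER_TUBE = 12
N_CLIENTS = 3
THREADS_PER_CLIENT = 10

feeder_tubes = [queue.Queue() for _ in range(N_FEEDER_TUBES)]
work_queue = queue.Queue()     # one of the "much larger work queues"
results_tube = queue.Queue()   # final tube receiving results

# Load the feeder tubes.
for i, tube in enumerate(feeder_tubes):
    for j in range(JOBS_PER_TUBE):
        tube.put(f"job-{i}-{j}")

def distributor():
    """Drain every feeder tube and funnel the jobs into the work queue."""
    for tube in feeder_tubes:
        while not tube.empty():
            work_queue.put(tube.get())

def worker():
    """Reserve jobs from the work queue and push results onward."""
    while True:
        try:
            job = work_queue.get(timeout=0.1)
        except queue.Empty:
            return
        results_tube.put(f"done:{job}")

distributor()
workers = [threading.Thread(target=worker)
           for _ in range(N_CLIENTS * THREADS_PER_CLIENT)]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(results_tube.qsize())  # 30 * 12 = 360 jobs processed
```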

I use delay in the many feeder tubes.  I do not use delay in the large work 
queues.  All priorities are the same.
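Concretely, the delay I am talking about is the second argument of the beanstalkd `put` command (`put <pri> <delay> <ttr> <bytes>`). A small helper that builds that command, assuming the 1.x text protocol framing:

```python
def put_command(data: bytes, pri: int = 1024, delay: int = 0,
                ttr: int = 60) -> bytes:
    """Build a beanstalkd `put` command per the text protocol:
    put <pri> <delay> <ttr> <bytes>\r\n<data>\r\n
    """
    header = f"put {pri} {delay} {ttr} {len(data)}\r\n".encode()
    return header + data + b"\r\n"

# Feeder-tube job: same priority everywhere, with a delay (in seconds).
print(put_command(b"feeder-job", pri=1024, delay=5))
# Work-queue job: same priority, no delay.
print(put_command(b"work-job", pri=1024, delay=0))
```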

I was having sporadic crashes due to connection issues in 1.8, so I 
upgraded to 1.9.  Since then the crashing has gone away, but my test case 
takes 3x longer to run and the CPU is maxed out the entire time.  With 
1.8, CPU would spike during the initial load of the 3000 tubes × 1200 jobs 
but then would settle down to very little while the queues were being 
serviced.

In terms of real numbers, my system used to process 600k jobs through the 
entire system in about 5 minutes.  It is now taking over 15 minutes.
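To put rates on those times:

```python
# Throughput before and after the upgrade, from the numbers above.
jobs = 600_000
old_rate = jobs / (5 * 60)   # jobs/sec under 1.8
new_rate = jobs / (15 * 60)  # jobs/sec under 1.9

print(old_rate)            # 2000.0 jobs/sec
print(round(new_rate))     # ~667 jobs/sec, a 3x slowdown
```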

I am testing on OS X, and my clients are on the same machine as beanstalkd.

Does anyone have any advice or help to offer?

Much appreciated.

-- 
You received this message because you are subscribed to the Google Groups 
"beanstalk-talk" group.