On Nov 29, 1:04 am, Andreas Fuchs <[EMAIL PROTECTED]> wrote:
> On Nov 28, 2:28 am, "Graham Barr" <[EMAIL PROTECTED]> wrote:
> I would never have spotted this. That was async_observer's logging
> function. Commenting that chunk out from async_observer seems to make
> the leak go away: After a few of these job things (where it would
> previously consume hundreds of MB), beanstalkd now consumes only about
> 444 kBytes, according to ps (-:
>
> So it seems like stats-job is leaking somewhere...
A little clarification. I have just done a few experiments "on the
wire", and it seems that memory usage in 1.1 grows when a worker
reserves a job and then runs stats-job on it, which is exactly the
usage pattern that async_observer exhibits. Here's a test
case using the ruby beanstalk-client library:
case using the ruby beanstalk-client library:
In one ruby process, do:
require 'beanstalk-client'
bs = Beanstalk::Pool.new(['localhost:11300'])
while true; bs.yput({:oink => 'foo'}) ; sleep 1 ; end
In another ruby process (note that the pool has to be set up here too):
require 'beanstalk-client'
bs = Beanstalk::Pool.new(['localhost:11300'])
while (j = bs.reserve) ; puts j.stats.inspect ; j.delete ; end
After a short while, you'll notice memory consumption growing by a
lot; for the first few seconds it isn't much, but then it grows into
the hundreds of MB very quickly. The same behavior can be seen using
peek_ready instead of reserve.
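To watch the growth while the two processes above run, here's a small
sketch of the ps-based measurement I used. It assumes a Unix ps and a
single beanstalkd process found via pgrep; the helper name is just
illustrative, nothing here is part of beanstalk-client:

```ruby
# Return the resident set size of a process in kilobytes.
# `ps -o rss=` prints RSS (in kB) with no header line.
def rss_kb(pid)
  `ps -o rss= -p #{pid}`.strip.to_i
end

# Example (requires a running beanstalkd):
#   pid = Integer(`pgrep -n beanstalkd`.strip)
#   10.times { puts "beanstalkd RSS: #{rss_kb(pid)} kB"; sleep 5 }
```

With the leak present, the reported RSS climbs steadily as the
consumer loop calls stats-job; with the logging chunk commented out of
async_observer, it stays flat.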
Hope this helps track down the stats-job leak.
Thanks,
Andreas.
You received this message because you are subscribed to the Google Groups
"beanstalk-talk" group.