On Monday, August 17, 2015 at 8:23:41 AM UTC-3, Jens Rantil wrote:
>
> Hi,
>
>    1. For a lot of queued tasks, will beanstalkd off-load to disks if I 
>    happen to queue too much when using write-ahead log? I am thinking of 
>    high-water marking or similar. If beanstalkd does not handle this, how are 
>    you guys handling occasions when the queue might fill up? Regular polling 
>    of queue size?
>
I ran some stress tests on beanstalkd some years ago, using Solaris as the 
host. I pushed millions and millions of jobs until beanstalkd started to 
refuse new ones; on that server, the process had allocated about 4 GB of 
memory before it began refusing jobs.

The jobs are actively refused; your producers are notified and should be 
able to cache their jobs until the server starts accepting them again.
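A minimal producer-side sketch of that pattern, not from the original post: the function and buffer names are hypothetical, but the reply lines are real beanstalkd protocol responses to `put` (`INSERTED <id>`, `BURIED <id>`, `DRAINING`, `OUT_OF_MEMORY`).

```python
def handle_put_reply(reply, job, local_buffer):
    """Decide what to do with a job based on beanstalkd's reply to `put`.

    `reply` is the server's response line, e.g. "INSERTED 42" or
    "OUT_OF_MEMORY". `local_buffer` is a producer-side cache of jobs
    to retry once the server accepts work again.
    """
    status = reply.split()[0]
    if status == "INSERTED":
        return "accepted"
    if status in ("OUT_OF_MEMORY", "DRAINING"):
        # Server is refusing new jobs: cache locally and retry later.
        local_buffer.append(job)
        return "buffered"
    # BURIED, JOB_TOO_BIG, etc. need case-by-case handling.
    return "error"

buf = []
handle_put_reply("INSERTED 42", b"job-1", buf)    # accepted, buffer stays empty
handle_put_reply("OUT_OF_MEMORY", b"job-2", buf)  # job-2 is cached in buf
```

The producer would drain `local_buffer` by re-issuing `put` commands once the server starts answering `INSERTED` again.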

Once I enabled the consumers, beanstalkd started accepting new jobs 
normally. I spawned some extra consumers in order to catch up, and 
beanstalkd then gave memory back to the system. No memory leaks, and no 
accepted job was lost.

I don't remember my job sizes, but they certainly never reached 256 KB.

I haven't stressed beanstalkd since then, so your mileage may vary 
nowadays.

-- 
Lisias
