in reply to everyone:

  yes, I should have simply taken the time to stress test
this myself; I ended up throwing RabbitMQ up that day because
I was short on time. To everyone else: I strongly encourage you
to follow Dan's example -- you won't know anything without benchmarking.

That said, Tim definitely had a very good idea, which I almost
implemented myself.

  also... if I had taken two seconds to realize that -z sets
the max job size, maybe I wouldn't have freaked out so much.
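For anyone else who missed it: the max job size is set at startup with the -z flag (a minimal sketch; the address and port here are just example values):

```shell
# raise the max job size to 2 MiB at startup (the default is 65535 bytes)
beanstalkd -z 2097152 -l 127.0.0.1 -p 11300
```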

Anyway, the lessons learned here: bench, bench, bench... and read your docs...
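For anyone curious why the server cares about job size at all: each put carries the body length up front, so the server can reject oversized jobs before reading them. Here is a minimal sketch of how a put is framed on the wire (put_frame is just an illustrative helper, not part of any client library):

```python
def put_frame(body: bytes, pri: int = 0, delay: int = 0, ttr: int = 60) -> bytes:
    """Build a beanstalkd `put` command frame for the given job body.

    Wire format: put <pri> <delay> <ttr> <bytes>\r\n<body>\r\n
    """
    header = f"put {pri} {delay} {ttr} {len(body)}\r\n".encode()
    return header + body + b"\r\n"

# A 2 MiB job body, the size discussed above. If the server was started
# with the default max job size (65535 bytes), it replies JOB_TOO_BIG
# instead of INSERTED; raising it with `beanstalkd -z` makes it fit.
job = b"x" * 2097152
frame = put_frame(job)
```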

thanks,
Ian

On Wed, 26 Aug at 15:39, Keith Rarick wrote:
> 
> On Tue, Aug 25, 2009 at 10:00 AM, Ian Eyberg<[email protected]> wrote:
> > I have just delved into this recently but I noticed that when I changed my
> > max job size from the default of 65535 to 2097152 (around 2 meg), my
> > performance hit the toilet -- what are some max job sizes that you guys use?
> 
> I always make small jobs (around 100 bytes), but there is no reason,
> in principle, not to make big jobs.
> 
> > Maybe someone can tell me whether or not beanstalk should even be considered
> > when it comes to larger job sizes of 2meg or so?
> 
> I simply haven't done any performance testing of large jobs. I'm sure
> there are improvements to be made. Here's a ticket:
> 
> http://github.com/kr/beanstalkd/issues/#issue/18
> 
> I'll try to get to it soon after releasing 1.4.
> 
> kr
> 

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"beanstalk-talk" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/beanstalk-talk?hl=en
-~----------~----~----~----~------~----~------~--~---