Any chance of releasing a minor update, say 1.9.1, to include this fix?

On Wednesday, November 13, 2013 11:25:40 AM UTC+1, Jacob wrote:
>
> I've fixed the issue by re-introducing the 10ms limit on calling
> prottick(), which updates the epoll/kqueue timeout period. This maintains
> the benefits of the adaptive timeout introduced in
> https://github.com/kr/beanstalkd/pull/169 while getting the v1.8
> performance back!
>
> See https://github.com/kr/beanstalkd/issues/200
>
> Regards,
> Jacob
>
> On Friday, October 18, 2013 11:46:18 AM UTC+2, Jacob wrote:
>>
>> Being responsible for commit 1d191ba26b20f402cc8e8ee3a7f7b0, I want to
>> chime in on this. Maybe together we can come up with a way to improve
>> the performance while keeping the benefits the commit provides. Here is
>> the background and reasoning for the commit:
>>
>> The commit is related to issue #169
>> (https://github.com/kr/beanstalkd/pull/169).
>>
>> The commit changes the wait interval between forced polls of open
>> sockets for new data from a fixed 10ms (before the commit) to the
>> minimum of: the time until the next deadline, the time until a pause
>> expires, or 1 hour since the last poll. This "minimum" wait interval is
>> calculated in prottick()
>> (https://github.com/kr/beanstalkd/blob/1d191ba26b20f402cc8e8ee3a7f7b0de9f1ff78c/prot.c#L1836).
>>
>> This "minimum" interval will usually be much larger than the previous
>> default of 10ms, so the average time between forced polls is much
>> longer. This significantly reduces beanstalkd's load on the OS, solving
>> the problem described in issue #169.
>>
>> My inspiration for the commit is the poll timeout handling in the event
>> loop of the Python Tornado web server:
>> https://github.com/facebook/tornado/blob/master/tornado/ioloop.py#L601
>> Without being a trained expert on this, I believe this is the right way
>> to handle epoll/kqueue socket poll timeouts.
>>
>> If the socket is ready for read or write before the time interval has
>> passed, epoll
>> (https://github.com/kr/beanstalkd/blob/1d191ba26b20f402cc8e8ee3a7f7b0de9f1ff78c/linux.c#L76)
>> or kqueue
>> (https://github.com/kr/beanstalkd/blob/1d191ba26b20f402cc8e8ee3a7f7b0de9f1ff78c/darwin.c#L101)
>> returns control to the application.
>>
>> I suspect that the observed reduction in performance is related to the
>> way event scheduling is done in epoll/kqueue/the kernel, versus
>> beanstalkd simply polling the socket every 10ms.
>>
>> I don't have a solution to improve performance at this point, but at
>> least you now know the reasoning behind the commit.
>>
>> Regards,
>> Jacob
>>
>> On Thursday, August 1, 2013 11:28:14 AM UTC+2, [email protected] wrote:
>>>
>>> Hi,
>>>
>>> We've also experienced some performance issues with beanstalkd 1.9.
>>> I've created a similar test program using a C client.
>>> You can check out the benchmark here: https://github.com/swehner/bsperf
>>> The results are similar:
>>>
>>> === 1.9 ===
>>> Using port 11300, threads 20, queues 1500, socksperthread 75, each
>>> using 666 jobs
>>> Total time: 149.771615 s (6676.832589/s)
>>>
>>> vs.
>>>
>>> === 1.9, reverting the adaptive epoll commit
>>> (1d191ba26b20f402cc8e8ee3a7f7b0de9f1ff78c) ===
>>> Using port 11300, threads 20, queues 1500, socksperthread 75, each
>>> using 666 jobs
>>> Total time: 71.081487 s (14068.360725/s)
>>>
>>> Hope this helps,
>>> Stefan
>>>
>>> On Tuesday, July 16, 2013 5:10:26 AM UTC+2, schmichael wrote:
>>>>
>>>> While I don't doubt there's been a slowdown, and your benchmarks are
>>>> useful, the Trendrr client has some glaring bugs and strange code
>>>> that make it less than an ideal test client.
>>>> For example, when reading a response from beanstalkd it has the
>>>> following strange loop:
>>>>
>>>> https://github.com/dustismo/TrendrrBeanstalk/blob/master/src/com/trendrr/beanstalk/BeanstalkConnection.java#L101-L114
>>>>
>>>> This code sleeps 100ms if no data is available for reading on a
>>>> socket (and erroneously logs "nothing to read for 100 seconds" after
>>>> 10,000 of those 100ms loops, when really about 1,000 seconds will
>>>> have passed; but that's just a logging error and unlikely to be hit).
>>>> This is a pretty strange and suboptimal way to do socket programming:
>>>> it's basically using non-blocking sockets (SocketChannels) as
>>>> blocking sockets. There are lots of similar oddities strewn
>>>> throughout the code base.
>>>>
>>>> Sorry I don't have time to dig into the performance regression; I
>>>> just wanted to give you a heads-up about your client of choice.
>>>>
>>>> On Wednesday, June 19, 2013 1:08:31 PM UTC-7, [email protected]
>>>> wrote:
>>>>>
>>>>> Here is some more information regarding the performance degradation
>>>>> I see on beanstalkd 1.9.
>>>>>
>>>>> Looking at the code, I suspect the culprit is this change:
>>>>> https://github.com/kr/beanstalkd/commit/1d191ba26b20f402cc8e8ee3a7f7b0de9f1ff78c
>>>>>
>>>>> Here are my numbers (all timings in seconds):
>>>>>
>>>>> Task                                                1.8     1.9
>>>>> enqueue 600 items each in 3000 tubes                58.2    128.8
>>>>> enqueue 500,000 items in 1 tube, then dequeue       21.15   46.15
>>>>> enqueue 500,000 items in 1 tube, then dequeue
>>>>>   (after enqueue 600 items each in 3000 tubes)      79.03   88.2
>>>>> enqueue 600 items each in 3000 tubes, then dequeue  152.13  313.55
>>>>>
>>>>> Tests were run 3 times and averaged; raw numbers were consistent.
>>>>> Tests were against a local beanstalkd on a 2013 MacBook Pro 15.
>>>>> Separate connections were used for each tube.
>>>>> All connections were left open for the duration in all cases.
>>>>>
>>>>> Work was performed using 20-thread thread pools in Java.
>>>>>
>>>>> The job body is: "This is a job"
>>>>>
>>>>> Here is a gist of the Java test class. It requires the Trendrr
>>>>> beanstalk client and commons-logging:
>>>>> https://gist.github.com/mattcross/5817503
>>>>>
>>>>> If someone could comment on whether this problem is on anyone's
>>>>> radar, I'd really appreciate it.
>>>>>
>>>>> -matt
>>>>>
>>>>> P.S. I also see maxed-out CPU for a much higher percentage of the
>>>>> time on 1.9, but I have not characterized it. In some cases the CPU
>>>>> stays high permanently after all queues have emptied but clients
>>>>> are still connected.
--
You received this message because you are subscribed to the Google Groups "beanstalk-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/beanstalk-talk.
For more options, visit https://groups.google.com/groups/opt_out.
