Hi Clark, 

I did some more playing around with unicorn and puma. This time I used 
JRuby for puma and its performance was really sweet. I tested locally with 
a test application, so YMMV:

http://ylan.segal-family.com/blog/2013/05/20/unicorn-vs-puma-redux/
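
In case anyone wants to try the same setup, a minimal config/puma.rb for 
JRuby might look something like this (the thread counts are purely 
illustrative, not a recommendation):

  # config/puma.rb -- a minimal sketch; tune the numbers for your own app
  threads 8, 32                          # JRuby has no GIL, so threads run in parallel
  port ENV.fetch("PORT") { 3000 }        # bind to $PORT on Heroku, 3000 locally
  environment ENV.fetch("RACK_ENV") { "development" }
  # no workers directive here: forking isn't the usual approach on JRuby

On MRI you'd lean on a workers directive instead, since the GIL limits how 
much the threads buy you.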

On Friday, May 3, 2013 11:33:21 AM UTC-7, Clark Li wrote:
>
> Ylan, how many dynos were involved? Did you check how many threads/procs 
> were actually created by the server for unicorn and puma?
>
> I'm evaluating between unicorn and puma as well. It seems that puma has 
> the potential to spawn more workers per dyno, since threads are lower 
> overhead than procs.
>
> - Clark
>
> On Monday, August 20, 2012 11:42:23 AM UTC-7, Ylan wrote:
>>
>> OK, here is my follow-up. 
>>
>> Indeed, I should have read httperf's manual more carefully, since I was 
>> not testing what I thought I was testing. 
>>
>> So, I now tried a different tool: siege. 
>>
>> I ran a few concurrency scenarios for the 3 servers using something like: 
>>
>> siege -c10 -t30s $URL 
>>
>> (Run 10 concurrent users against $URL for 30 seconds.) 
>>
>> You can see a plot of the average response time for different servers and 
>> concurrency settings: 
>>
>> http://oi48.tinypic.com/65rjo7.jpg 
>>
>> My interpretation: I can squeeze better performance from my dynos by 
>> ditching thin for unicorn or puma. Unicorn seems to perform better than 
>> puma when using concurrency > 20. (As mentioned before, puma and thin were 
>> using their default configuration and unicorn was configured for 3 workers.) 
>> All tests were run on Heroku Cedar with MRI Ruby 1.9.3. 
>>
>> Of course, this is mostly academic, since my website doesn't really have 
>> that much traffic, but I was interested in seeing the performance 
>> differences from switching servers. 
>>
>> What is the group's take on this? 
>>
>> -- 
>> Ylan Segal 
>> [email protected] 
>> Tel: +1-858-224-7421 
>> Fax: +1-858-876-1799 
>>
>> On Aug 18, 2012, at 7:34 AM, Ylan Segal wrote: 
>>
>> > On Aug 17, 2012, at 6:17 PM, Matt Aimonetti <[email protected]> 
>> wrote: 
>> > 
>> >> you might want to run that from within EC2 to avoid network latency, 
>> then you might want to increase the concurrency (I believe you are 
>> currently only sending 1 request at a time). 
>> > 
>> > I misunderstood the httperf manual. I thought I had a concurrency of 
>> 10. That would explain why it didn't make much difference which server I 
>> used. They were only processing 1 request at a time. 
>> > 
>> > I'll give it another try and post back. 
>> > 
>> >> Finally, if your code is waiting on IO for 45ms per request, this is 
>> where your bottleneck is, not in the web server. 
>> >> If that's the case, you want to try to increase the concurrency to see 
>> when you hit the maximum number of requests per process available. 
>> >> Thin is single-threaded and blocking, so if your response is slow 
>> because of a DB call for instance, 1.9 + Puma should give you better 
>> throughput. 
>> >> Unicorn should also be able to fork more processes to handle the load 
>> (depending on your settings and the available resources). Rainbows is 
>> an alternative web server based on unicorn and meant for slower response 
>> times. 
>> > 
>> > Thanks. I'll look into rainbows as well. I also heard some news about 
>> Goliath. 
>> > 
>> >> (it looks like, if you are really hitting your server, 50ms from your 
>> home connection is quite a good response time tho) 
>> > 
>> > :) It is the homepage. I am caching the views, and made sure the caches 
>> were warm before running the test. 
>> > 
>> > -- 
>> > Ylan 
>> > 
>> > -- 
>> > SD Ruby mailing list 
>> > [email protected] 
>> > http://groups.google.com/group/sdruby 
>>
>>

-- 
SD Ruby mailing list
[email protected]
http://groups.google.com/group/sdruby