Re: [akka-user] Enjoying Akka HTTP performance

2016-09-14 Thread Guido Medina
If you want to test Netty indirectly with a reasonable API in the 
middle, try comparing with Vert.x 3.
That way you are not just comparing against raw Netty but against a framework 
built on top of Netty that has a promising HTTP API and is mature (and also 
faster than NodeJS, of course):

http://vertx.io/docs/vertx-core/java/#_creating_an_http_server

Regards,

Guido.


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Konrad Malawski
That's very cool - thanks for posting these, Christian!
We didn't actually compare with Play (didn't have time somehow); we were
focused on beating Spray :-)

Very cool to see we're on par with Netty Play.

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
Thanks for confirming :)


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
Reflog:

schmitch@deployster:~/projects/schmitch/wrk2$ git reflog HEAD

c4250ac HEAD@{0}: clone: from https://github.com/giltene/wrk2.git


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
it is actually wrk2:

schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk --version

wrk 4.0.0 [kqueue] Copyright (C) 2012 Will Glozer


I compiled it on the Mac against the Homebrew OpenSSL library.

Actually I also think that at something like 60k-70k packets my client 
network gear and the switch start to fall behind (that's why the latency is 
so high).


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
https://github.com/giltene/wrk2


Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
extracted from my gist:

akka-http:
schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s -R120k 
http://192.168.179.157:3000
Running 5m test @ http://192.168.179.157:3000
  2 threads and 100 connections
  Thread calibration: mean lat.: 787.360ms, rate sampling interval: 2975ms
  Thread calibration: mean lat.: 585.613ms, rate sampling interval: 2473ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    30.11s    22.42s    1.58m    62.48%
    Req/Sec    44.77k     4.77k   54.28k    58.88%
  26888534 requests in 5.00m, 4.50GB read
Requests/sec:  89628.49
Transfer/sec: 15.37MB

Play with Netty and native transport enabled (Netty without native is 
exactly the same as akka-http):
schmitch@deployster:~/projects/schmitch/wrk2$ ./wrk -t2 -c100 -d300s -R120k 
http://192.168.179.157:9000
Running 5m test @ http://192.168.179.157:9000
  2 threads and 100 connections
  Thread calibration: mean lat.: 625.068ms, rate sampling interval: 2504ms
  Thread calibration: mean lat.: 696.276ms, rate sampling interval: 2562ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    28.14s    18.49s    1.32m    61.39%
    Req/Sec    46.78k     3.23k   51.52k    52.63%
  28079997 requests in 5.00m, 4.02GB read
Requests/sec:  93600.05
Transfer/sec: 13.74MB
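As a quick sanity check on the summaries above, dividing total requests by the nominal 300-second window reproduces the reported Requests/sec lines (the run labels below are mine):

```python
# Cross-check the two wrk2 summaries quoted above: total requests over the
# nominal 300-second (5-minute) window should match the reported Requests/sec.
runs = {
    "akka-http": (26_888_534, 89_628.49),
    "play-netty-native": (28_079_997, 93_600.05),
}
for name, (total_requests, reported_rps) in runs.items():
    computed = total_requests / 300.0
    # Small tolerance: wrk2 divides by the actual elapsed time, which is
    # marginally longer than the nominal 300 s.
    assert abs(computed - reported_rps) / reported_rps < 1e-3, name
    print(f"{name}: {computed:.2f} req/s (reported {reported_rps})")
```

Both runs check out, i.e. neither summary line is a transcription error.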

On Monday, 12 September 2016 at 14:52:48 UTC+2, √ wrote:
>
> What does wrk2 say?
>
> On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt wrote:
>
>> I just compared Playframework on Netty vs Akka-http; I guess that's fair 
>> since Play is quite high level.
>>
>> Performance for 2k, 20k, 120k Req/s (2k was used to warm up the VM): 
>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>> Projects: https://github.com/schmitch/performance (akka-http is just his 
>> project + @volatile on the var)
>>
>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>>
>>>
>>>
>>> -- 
>>> Konrad `ktoso` Malawski
>>> Akka  @ Lightbend 
>>>
>>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>>> c.sc...@briefdomain.de) wrote:
>>>
>>> actually wouldn't it be more reasonable to try it against netty?
>>>
>>> Yes and no. Then one should compare raw IO APIs, and none of the 
>>> high-level features Akka HTTP provides (routing, trivial back-pressured 
>>> entity streaming, fully typesafe http model) etc. 
>>>
>>> It's a fun experiment to see how much faster Netty is, but I don't think 
>>> it's the goal here – if you really want to write each and every 
>>> microservice with raw Netty APIs, enjoy, but I don't think that's the nicest 
>>> API to just bang out a service in 4 minutes :)
>>>
>>> (Note, much love for Netty here, but I don't think comparing 1:1 with 
>>> Akka HTTP here is the right way to look at it (yes, of course we'll be 
>>> slower ;-)).
>>>
>>>
>>> I mean, that node is slower than akka-http isn't something I wonder about.
>>>
>>> You'd be surprised what node people claim about its performance ;-)
>>>
 On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:

 Hi Adam, 
 thanks for sharing the runs!
 Your benchmarking method is good - thanks for doing a proper warmup and 
 using wrk2 :-)
 Notice that the multi-second response times in node basically mean 
 it's not keeping up and stalling the connections (also known as 
 coordinated omission).
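The effect described here is exactly what wrk2 was built to correct: when the server stalls, a naive closed-loop load generator also delays its own sends and then measures latency only from the delayed send, hiding the backlog, whereas wrk2 measures from the *intended* send time. A minimal sketch with hypothetical numbers (10 ms intended send interval, one 500 ms stall):

```python
# Illustration of coordinated omission (hypothetical numbers): a load
# generator intends to send one request every 10 ms, but the server stalls
# for 500 ms at t=0 and then answers instantly.
INTERVAL = 0.010   # intended gap between sends, seconds
STALL = 0.500      # server stall at the start of the run

naive, corrected = [], []
t_free = 0.0  # when the (single) connection becomes free again
for i in range(100):
    t_intended = i * INTERVAL
    t_send = max(t_intended, t_free)        # sends queue behind the stall
    t_done = max(t_send, STALL)             # nothing completes before 500 ms
    naive.append(t_done - t_send)           # measured from the actual send
    corrected.append(t_done - t_intended)   # measured from the intended send
    t_free = t_done

# The naive generator charges the whole stall to one request; the wrk2-style
# correction charges it to every request that should have gone out meanwhile.
print(f"naive mean latency:     {sum(naive) / len(naive) * 1000:.1f} ms")
print(f"corrected mean latency: {sum(corrected) / len(corrected) * 1000:.1f} ms")
```

The corrected mean comes out more than an order of magnitude higher, which is why uncorrected tools make a stalling server look far better than it is.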

 It's great to see such side by side with node, thanks for sharing it 
 again.
 Happy hakking!

 On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:

> Hi,
>
> I'd just like to share my satisfaction with Akka HTTP performance in 
> 2.4.10.
> I'm diagnosing some low-level Node.js performance issues, and while 
> running various tests that only require the most basic "Hello World"-style 
> code, I decided to take a few minutes to check how Akka HTTP would handle 
> the same work.
> I was quite impressed with the results, so I thought I'd share.
>
> I'm running two c4.large instances (so two cores on each instance) - 
> one running the HTTP service and another running wrk2.
> I've tested only two short sets (seeing as I have other work to do):
>
>1. use 2 threads to simulate 100 concurrent users pushing 2k 
>requests/sec for 5 minutes
>2. use 2 threads to simulate 100 concurrent users pushing 20k 
>requests/sec for 5 minutes 
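The two scenarios above map directly onto the wrk2 flags used elsewhere in this thread (-t threads, -c connections, -d duration, -R target rate); a small sketch that builds the invocations, with the host as a placeholder:

```python
# Build the wrk2 command lines for the two test scenarios: 100 connections
# on 2 threads, 5-minute runs, at 2k and 20k target requests/sec.
# "HOST:PORT" is a placeholder, not an address from the thread.
scenarios = [2_000, 20_000]  # target requests/sec
for rate in scenarios:
    cmd = f"./wrk -t2 -c100 -d300s -R{rate // 1000}k http://HOST:PORT/"
    print(cmd)
```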
>
> In both cases, the tests are actually executed twice without a restart 
> in between and I throw away the results of the first run.
>
> The first run is just to get JIT and other adaptive mechanisms to do 
> their thing.
>
> 5 minutes seems to be enough based on the CPU behavior I see, but for 
> a more "official" test I'd probably use something longer.
>
>
> As for the code, I was using vanilla Node code - the kind you see as 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
What does wrk2 say?

On Mon, Sep 12, 2016 at 2:37 PM, Christian Schmitt  wrote:

> I just compared Playframework on Netty vs Akka-http guess thats fair since
> play is quite high level.
>
> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
> Projects: https://github.com/schmitch/performance (akka-http is just his
> project + @volatile on the var)
>
>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>
>>
>>
>> --
>> Konrad `ktoso` Malawski
>> Akka  @ Lightbend 
>>
>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>> c.sc...@briefdomain.de) wrote:
>>
>> actually wouldn't it be more reasonable to try it against netty?
>>
>> Yes and no. Then one should compare raw IO APIs, and none of the
>> high-level features Akka HTTP provides (routing, trivial back-pressured
>> entity streaming, fully typesafe http model) etc.
>>
>> It's a fun experiment to see how much faster Netty is, but I don't think
>> it's the goal here – if you really want to write each and every
>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
>> API to just bang out a service in 4 minutes :)
>>
>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>> slower ;-)).
>>
>>
>> I mean that node is slower than akka-http isn't something I wonder about.
>>
>> You'd be surprised what node people claim about its performance ;-)
>>
>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>>>
>>> Hi Adam,
>>> thanks for sharing the runs!
>>> Your benchmarking method is good - thanks for doing a proper warmup and
>>> using wrk2 :-)
>>> Notice that the multiple second response times in node basically mean
>>> it's not keeping up and stalling the connections (also known as coordinated
>>> omission).
>>>
>>> It's great to see such side by side with node, thanks for sharing it
>>> again.
>>> Happy hakking!
>>>
>>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>>
 Hi,

 I'd just like to share my satisfaction from Akka HTTP performance in
 2.4.10.
 I'm diagnosing some low level Node.js performance issues and while
 running various tests that only require the most basic "Hello World" style
 code, I decided to take a few minutes to check how would Akka HTTP handle
 the same work.
 I was quite impressed with the results, so I thought I'd share.

 I'm running two c4.large instances (so two cores on each instance) -
 one running the HTTP service and another running wrk2.
 I've tested only two short sets (seeing as I have other work to do):

1. use 2 threads to simulate 100 concurrent users pushing 2k
requests/sec for 5 minutes
2. use 2 threads to simulate 100 concurrent users pushing 20k
requests/sec for 5 minutes

 In both cases, the tests are actually executed twice without a restart
 in between and I throw away the results of the first run.

 The first run is just to get JIT and other adaptive mechanisms to do
 their thing.

 5 minutes seems to be enough based on the CPU behavior I see, but for a
 more "official" test I'd probably use something longer.


 As for the code, I was using vanilla Node code - the kind you see as
 the most basic example (no web frameworks or anything) but for Akka, I used
 the high level DSL.


 Here's the Code:


 *Akka HTTP*


 package com.example.rest

 import akka.actor.ActorSystem
 import akka.http.scaladsl.Http
 import akka.http.scaladsl.server.Directives._
 import akka.stream.ActorMaterializer


 case class Reply(message: String = "Hello World", userCount: Int)

 object MyJsonProtocol
   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
 with spray.json.DefaultJsonProtocol {

   implicit val replyFormat = jsonFormat2(Reply.apply)
 }

 object FullWebServer {
   var userCount = 0;

   def getReply() = {
 userCount += 1
 Reply(userCount=userCount)
   }

   def main(args: Array[String]) {
 implicit val system = ActorSystem()
 implicit val materializer = ActorMaterializer()
 import MyJsonProtocol._

 val route =
   get {
 complete(getReply())
   }

 // `route` will be implicitly converted to `Flow` using 
 `RouteResult.route2HandlerFlow`
 val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
 println("Server online at http://127.0.0.1:3000/")
   }
 }


 *Node*

 var http = require('http');

 let userCount = 0;
 var server = http.createServer(function (request, 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
That would've been a good comment on that line of code :)

On Mon, Sep 12, 2016 at 2:45 PM, אדם חונן  wrote:

> In my original code I really didn't care about that value or its validity.
> The only thing I wanted to achieve was different JSON messages, like in
> Node where, BTW, this variable exists twice - once per process.
> If you really need to share mutable state Node is already out of the
> conversation...
>
> On Mon, Sep 12, 2016 at 3:40 PM, Viktor Klang 
> wrote:
>
>> @volatile on the var will not really help, += is not an atomic
>> instruction.
>>
>> --
>> Cheers,
>> √
>>
>> On Sep 12, 2016 2:37 PM, "Christian Schmitt" 
>> wrote:
>>
>>> I just compared Playframework on Netty vs Akka-http guess thats fair
>>> since play is quite high level.
>>>
>>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
>>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>>> Projects: https://github.com/schmitch/performance (akka-http is just
>>> his project + @volatile on the var)
>>>
>>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:



 --
 Konrad `ktoso` Malawski
 Akka  @ Lightbend 

 On 12 September 2016 at 12:56:46, Christian Schmitt (
 c.sc...@briefdomain.de) wrote:

 actually wouldn't it be more reasonable to try it against netty?

 Yes and no. Then one should compare raw IO APIs, and none of the
 high-level features Akka HTTP provides (routing, trivial back-pressured
 entity streaming, fully typesafe http model) etc.

 It's a fun experiment to see how much faster Netty is, but I don't
 think it's the goal here – if you really want to write each and every
 microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
 API to just bang out a service in 4 minutes :)

 (Note, much love for Netty here, but I don't think comparing 1:1 with
 Akka HTTP here is the right way to look at it (yes, of course we'll be
 slower ;-)).


 I mean that node is slower than akka-http isn't something I wonder
 about.

 You'd be surprised what node people claim about its performance ;-)

 On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>
> Hi Adam,
> thanks for sharing the runs!
> Your benchmarking method is good - thanks for doing a proper warmup
> and using wrk2 :-)
> Notice that the multiple second response times in node basically mean
> it's not keeping up and stalling the connections (also known as 
> coordinated
> omission).
>
> It's great to see such side by side with node, thanks for sharing it
> again.
> Happy hakking!
>
> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>
>> Hi,
>>
>> I'd just like to share my satisfaction from Akka HTTP performance in
>> 2.4.10.
>> I'm diagnosing some low level Node.js performance issues and while
>> running various tests that only require the most basic "Hello World" 
>> style
>> code, I decided to take a few minutes to check how would Akka HTTP handle
>> the same work.
>> I was quite impressed with the results, so I thought I'd share.
>>
>> I'm running two c4.large instances (so two cores on each instance) -
>> one running the HTTP service and another running wrk2.
>> I've tested only two short sets (seeing as I have other work to do):
>>
>>1. use 2 threads to simulate 100 concurrent users pushing 2k
>>requests/sec for 5 minutes
>>2. use 2 threads to simulate 100 concurrent users pushing 20k
>>requests/sec for 5 minutes
>>
>> In both cases, the tests are actually executed twice without a
>> restart in between and I throw away the results of the first run.
>>
>> The first run is just to get JIT and other adaptive mechanisms to do
>> their thing.
>>
>> 5 minutes seems to be enough based on the CPU behavior I see, but for
>> a more "official" test I'd probably use something longer.
>>
>>
>> As for the code, I was using vanilla Node code - the kind you see as
>> the most basic example (no web frameworks or anything) but for Akka, I 
>> used
>> the high level DSL.
>>
>>
>> Here's the Code:
>>
>>
>> *Akka HTTP*
>>
>>
>> package com.example.rest
>>
>> import akka.actor.ActorSystem
>> import akka.http.scaladsl.Http
>> import akka.http.scaladsl.server.Directives._
>> import akka.stream.ActorMaterializer
>>
>>
>> case class Reply(message: String = "Hello World", userCount: Int)
>>
>> object MyJsonProtocol
>>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
>> with spray.json.DefaultJsonProtocol {
>>
>>   implicit val replyFormat = 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread אדם חונן
In my original code I really didn't care about that value or its validity.
The only thing I wanted to achieve was different JSON messages, like in Node
where, BTW, this variable exists twice - once per process.
If you really need to share mutable state Node is already out of the
conversation...

On Mon, Sep 12, 2016 at 3:40 PM, Viktor Klang 
wrote:

> @volatile on the var will not really help, += is not an atomic instruction.
>
> --
> Cheers,
> √
>
> On Sep 12, 2016 2:37 PM, "Christian Schmitt" 
> wrote:
>
>> I just compared Playframework on Netty vs Akka-http guess thats fair
>> since play is quite high level.
>>
>> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
>> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
>> Projects: https://github.com/schmitch/performance (akka-http is just his
>> project + @volatile on the var)
>>
>> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>>
>>>
>>>
>>> --
>>> Konrad `ktoso` Malawski
>>> Akka  @ Lightbend 
>>>
>>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>>> c.sc...@briefdomain.de) wrote:
>>>
>>> actually wouldn't it be more reasonable to try it against netty?
>>>
>>> Yes and no. Then one should compare raw IO APIs, and none of the
>>> high-level features Akka HTTP provides (routing, trivial back-pressured
>>> entity streaming, fully typesafe http model) etc.
>>>
>>> It's a fun experiment to see how much faster Netty is, but I don't think
>>> it's the goal here – if you really want to write each and every
>>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
>>> API to just bang out a service in 4 minutes :)
>>>
>>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>>> slower ;-)).
>>>
>>>
>>> I mean that node is slower than akka-http isn't something I wonder about.
>>>
>>> You'd be surprised what node people claim about its performance ;-)
>>>
>>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:

 Hi Adam,
 thanks for sharing the runs!
 Your benchmarking method is good - thanks for doing a proper warmup and
 using wrk2 :-)
 Notice that the multiple second response times in node basically mean
 it's not keeping up and stalling the connections (also known as coordinated
 omission).

 It's great to see such side by side with node, thanks for sharing it
 again.
 Happy hakking!

 On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:

> Hi,
>
> I'd just like to share my satisfaction from Akka HTTP performance in
> 2.4.10.
> I'm diagnosing some low level Node.js performance issues and while
> running various tests that only require the most basic "Hello World" style
> code, I decided to take a few minutes to check how would Akka HTTP handle
> the same work.
> I was quite impressed with the results, so I thought I'd share.
>
> I'm running two c4.large instances (so two cores on each instance) -
> one running the HTTP service and another running wrk2.
> I've tested only two short sets (seeing as I have other work to do):
>
>1. use 2 threads to simulate 100 concurrent users pushing 2k
>requests/sec for 5 minutes
>2. use 2 threads to simulate 100 concurrent users pushing 20k
>requests/sec for 5 minutes
>
> In both cases, the tests are actually executed twice without a restart
> in between and I throw away the results of the first run.
>
> The first run is just to get JIT and other adaptive mechanisms to do
> their thing.
>
> 5 minutes seems to be enough based on the CPU behavior I see, but for
> a more "official" test I'd probably use something longer.
>
>
> As for the code, I was using vanilla Node code - the kind you see as
> the most basic example (no web frameworks or anything) but for Akka, I 
> used
> the high level DSL.
>
>
> Here's the Code:
>
>
> *Akka HTTP*
>
>
> package com.example.rest
>
> import akka.actor.ActorSystem
> import akka.http.scaladsl.Http
> import akka.http.scaladsl.server.Directives._
> import akka.stream.ActorMaterializer
>
>
> case class Reply(message: String = "Hello World", userCount: Int)
>
> object MyJsonProtocol
>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
> with spray.json.DefaultJsonProtocol {
>
>   implicit val replyFormat = jsonFormat2(Reply.apply)
> }
>
> object FullWebServer {
>   var userCount = 0;
>
>   def getReply() = {
> userCount += 1
> Reply(userCount=userCount)
>   }
>
>   def main(args: Array[String]) {
> implicit val system = 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
@volatile on the var will not really help, += is not an atomic instruction.

-- 
Cheers,
√
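
The non-atomicity of `+=` on a `volatile` field is easy to demonstrate on the JVM. A minimal sketch (the class name and iteration counts are illustrative, not from the thread): two threads bump both a `volatile int` and an `AtomicInteger`. The atomic counter always ends at exactly 2 x 100,000, while the volatile one typically loses updates, because `+=` is a separate read, increment, and write rather than one atomic step.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterRace {
    // volatile guarantees visibility, but += is still read-increment-write:
    // two threads can read the same value and one increment gets lost.
    static volatile int unsafeCount = 0;

    // AtomicInteger performs the read-modify-write as a single atomic CAS.
    static final AtomicInteger safeCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount += 1;            // racy
                safeCount.incrementAndGet(); // atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();

        // safeCount is always exactly 200000; unsafeCount is usually lower.
        System.out.println("volatile=" + unsafeCount + " atomic=" + safeCount.get());
    }
}
```

This is why the AtomicInteger suggestion made earlier in the thread is the right fix for the `userCount` var (or, in Akka style, keeping the counter inside a single actor).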

On Sep 12, 2016 2:37 PM, "Christian Schmitt" 
wrote:

> I just compared Playframework on Netty vs Akka-http guess thats fair since
> play is quite high level.
>
> Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM):
> https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
> Projects: https://github.com/schmitch/performance (akka-http is just his
> project + @volatile on the var)
>
> On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>>
>>
>>
>> --
>> Konrad `ktoso` Malawski
>> Akka  @ Lightbend 
>>
>> On 12 September 2016 at 12:56:46, Christian Schmitt (
>> c.sc...@briefdomain.de) wrote:
>>
>> actually wouldn't it be more reasonable to try it against netty?
>>
>> Yes and no. Then one should compare raw IO APIs, and none of the
>> high-level features Akka HTTP provides (routing, trivial back-pressured
>> entity streaming, fully typesafe http model) etc.
>>
>> It's a fun experiment to see how much faster Netty is, but I don't think
>> it's the goal here – if you really want to write each and every
>> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
>> API to just bang out a service in 4 minutes :)
>>
>> (Note, much love for Netty here, but I don't think comparing 1:1 with
>> Akka HTTP here is the right way to look at it (yes, of course we'll be
>> slower ;-)).
>>
>>
>> I mean that node is slower than akka-http isn't something I wonder about.
>>
>> You'd be surprised what node people claim about its performance ;-)
>>
>> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>>>
>>> Hi Adam,
>>> thanks for sharing the runs!
>>> Your benchmarking method is good - thanks for doing a proper warmup and
>>> using wrk2 :-)
>>> Notice that the multiple second response times in node basically mean
>>> it's not keeping up and stalling the connections (also known as coordinated
>>> omission).
>>>
>>> It's great to see such side by side with node, thanks for sharing it
>>> again.
>>> Happy hakking!
>>>
>>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>>
 Hi,

 I'd just like to share my satisfaction from Akka HTTP performance in
 2.4.10.
 I'm diagnosing some low level Node.js performance issues and while
 running various tests that only require the most basic "Hello World" style
 code, I decided to take a few minutes to check how would Akka HTTP handle
 the same work.
 I was quite impressed with the results, so I thought I'd share.

 I'm running two c4.large instances (so two cores on each instance) -
 one running the HTTP service and another running wrk2.
 I've tested only two short sets (seeing as I have other work to do):

1. use 2 threads to simulate 100 concurrent users pushing 2k
requests/sec for 5 minutes
2. use 2 threads to simulate 100 concurrent users pushing 20k
requests/sec for 5 minutes

 In both cases, the tests are actually executed twice without a restart
 in between and I throw away the results of the first run.

 The first run is just to get JIT and other adaptive mechanisms to do
 their thing.

 5 minutes seems to be enough based on the CPU behavior I see, but for a
 more "official" test I'd probably use something longer.


 As for the code, I was using vanilla Node code - the kind you see as
 the most basic example (no web frameworks or anything) but for Akka, I used
 the high level DSL.


 Here's the Code:


 *Akka HTTP*


 package com.example.rest

 import akka.actor.ActorSystem
 import akka.http.scaladsl.Http
 import akka.http.scaladsl.server.Directives._
 import akka.stream.ActorMaterializer


 case class Reply(message: String = "Hello World", userCount: Int)

 object MyJsonProtocol
   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
 with spray.json.DefaultJsonProtocol {

   implicit val replyFormat = jsonFormat2(Reply.apply)
 }

 object FullWebServer {
   var userCount = 0;

   def getReply() = {
 userCount += 1
 Reply(userCount=userCount)
   }

   def main(args: Array[String]) {
 implicit val system = ActorSystem()
 implicit val materializer = ActorMaterializer()
 import MyJsonProtocol._

 val route =
   get {
 complete(getReply())
   }

 // `route` will be implicitly converted to `Flow` using 
 `RouteResult.route2HandlerFlow`
 val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
 println("Server online at http://127.0.0.1:3000/")
   }
 }


 *Node*

 var http = require('http');

 let 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
I just compared Play Framework on Netty vs akka-http; I guess that's fair since 
Play is quite high level.

Performance for 2k, 20k, 120k Req/s (2k was used to warmup the VM): 
https://gist.github.com/schmitch/2ca3359bc34560c6063d0b00eb0a7aac
Projects: https://github.com/schmitch/performance (akka-http is just his 
project + @volatile on the var)

On Monday, 12 September 2016 at 13:04:29 UTC+2, Konrad Malawski wrote:
>
>
>
> -- 
> Konrad `ktoso` Malawski
> Akka  @ Lightbend 
>
> On 12 September 2016 at 12:56:46, Christian Schmitt (
> c.sc...@briefdomain.de ) wrote:
>
> actually wouldn't it be more reasonable to try it against netty?
>
> Yes and no. Then one should compare raw IO APIs, and none of the 
> high-level features Akka HTTP provides (routing, trivial back-pressured 
> entity streaming, fully typesafe http model) etc. 
>
> It's a fun experiment to see how much faster Netty is, but I don't think 
> it's the goal here – if you really want to write each and every 
> microservice with raw Netty APIs–enjoy, but I don't think that's the nicest 
> API to just bang out a service in 4 minutes :)
>
> (Note, much love for Netty here, but I don't think comparing 1:1 with Akka 
> HTTP here is the right way to look at it (yes, of course we'll be slower 
> ;-)).
>
>
> I mean that node is slower than akka-http isn't something I wonder about.
>
> You'd be surprised what node people claim about its performance ;-)
>
> On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote: 
>>
>> Hi Adam, 
>> thanks for sharing the runs!
>> Your benchmarking method is good - thanks for doing a proper warmup and 
>> using wrk2 :-)
>> Notice that the multiple second response times in node basically mean 
>> it's not keeping up and stalling the connections (also known as coordinated 
>> omission).
>>
>> It's great to see such side by side with node, thanks for sharing it 
>> again.
>> Happy hakking!
>>
>> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>>
>>> Hi,
>>>
>>> I'd just like to share my satisfaction from Akka HTTP performance in 
>>> 2.4.10.
>>> I'm diagnosing some low level Node.js performance issues and while 
>>> running various tests that only require the most basic "Hello World" style 
>>> code, I decided to take a few minutes to check how would Akka HTTP handle 
>>> the same work.
>>> I was quite impressed with the results, so I thought I'd share.
>>>
>>> I'm running two c4.large instances (so two cores on each instance) - one 
>>> running the HTTP service and another running wrk2.
>>> I've tested only two short sets (seeing as I have other work to do):
>>>
>>>1. use 2 threads to simulate 100 concurrent users pushing 2k 
>>>requests/sec for 5 minutes
>>>2. use 2 threads to simulate 100 concurrent users pushing 20k 
>>>requests/sec for 5 minutes 
>>>
>>> In both cases, the tests are actually executed twice without a restart 
>>> in between and I throw away the results of the first run.
>>>
>>> The first run is just to get JIT and other adaptive mechanisms to do 
>>> their thing.
>>>
>>> 5 minutes seems to be enough based on the CPU behavior I see, but for a 
>>> more "official" test I'd probably use something longer.
>>>
>>>
>>> As for the code, I was using vanilla Node code - the kind you see as the 
>>> most basic example (no web frameworks or anything) but for Akka, I used the 
>>> high level DSL.
>>>
>>>
>>> Here's the Code:
>>>
>>>
>>> *Akka HTTP*
>>>
>>>
>>> package com.example.rest
>>>
>>> import akka.actor.ActorSystem
>>> import akka.http.scaladsl.Http
>>> import akka.http.scaladsl.server.Directives._
>>> import akka.stream.ActorMaterializer
>>>
>>>
>>> case class Reply(message: String = "Hello World", userCount: Int)
>>>
>>> object MyJsonProtocol
>>>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
>>> with spray.json.DefaultJsonProtocol {
>>>
>>>   implicit val replyFormat = jsonFormat2(Reply.apply)
>>> }
>>>
>>> object FullWebServer {
>>>   var userCount = 0;
>>>
>>>   def getReply() = {
>>> userCount += 1
>>> Reply(userCount=userCount)
>>>   }
>>>
>>>   def main(args: Array[String]) {
>>> implicit val system = ActorSystem()
>>> implicit val materializer = ActorMaterializer()
>>> import MyJsonProtocol._
>>>
>>> val route =
>>>   get {
>>> complete(getReply())
>>>   }
>>>
>>> // `route` will be implicitly converted to `Flow` using 
>>> `RouteResult.route2HandlerFlow`
>>> val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
>>> println("Server online at http://127.0.0.1:3000/")
>>>   }
>>> }
>>>
>>>
>>> *Node*
>>>
>>> var http = require('http');
>>>
>>> let userCount = 0;
>>> var server = http.createServer(function (request, response) {
>>> userCount++;
>>> response.writeHead(200, {"Content-Type": "application/json"});
>>> const hello = {msg: "Hello world", userCount: userCount};
>>> response.end(JSON.stringify(hello));
>>> });
>>>

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Konrad Malawski
-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 

On 12 September 2016 at 12:56:46, Christian Schmitt (
c.schm...@briefdomain.de) wrote:

actually wouldn't it be more reasonable to try it against netty?

Yes and no. Then one should compare raw IO APIs, and none of the high-level
features Akka HTTP provides (routing, trivial back-pressured entity
streaming, fully typesafe http model) etc.

It's a fun experiment to see how much faster Netty is, but I don't think
it's the goal here – if you really want to write each and every
microservice with raw Netty APIs–enjoy, but I don't think that's the nicest
API to just bang out a service in 4 minutes :)

(Note, much love for Netty here, but I don't think comparing 1:1 with Akka
HTTP here is the right way to look at it (yes, of course we'll be slower
;-)).


I mean, that Node is slower than akka-http isn't something I wonder about.

You'd be surprised what node people claim about its performance ;-)

On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>
> Hi Adam,
> thanks for sharing the runs!
> Your benchmarking method is good - thanks for doing a proper warmup and
> using wrk2 :-)
> Notice that the multiple second response times in node basically mean it's
> not keeping up and stalling the connections (also known as coordinated
> omission).
>
> It's great to see such side by side with node, thanks for sharing it again.
> Happy hakking!
>
> On Mon, Sep 12, 2016 at 10:33 AM, Adam 
> wrote:
>
>> Hi,
>>
>> I'd just like to share my satisfaction from Akka HTTP performance in
>> 2.4.10.
>> I'm diagnosing some low level Node.js performance issues and while
>> running various tests that only require the most basic "Hello World" style
>> code, I decided to take a few minutes to check how would Akka HTTP handle
>> the same work.
>> I was quite impressed with the results, so I thought I'd share.
>>
>> I'm running two c4.large instances (so two cores on each instance) - one
>> running the HTTP service and another running wrk2.
>> I've tested only two short sets (seeing as I have other work to do):
>>
>>1. use 2 threads to simulate 100 concurrent users pushing 2k
>>requests/sec for 5 minutes
>>2. use 2 threads to simulate 100 concurrent users pushing 20k
>>requests/sec for 5 minutes
>>
>> In both cases, the tests are actually executed twice without a restart in
>> between and I throw away the results of the first run.
>>
>> The first run is just to get JIT and other adaptive mechanisms to do
>> their thing.
>>
>> 5 minutes seems to be enough based on the CPU behavior I see, but for a
>> more "official" test I'd probably use something longer.
>>
>>
>> As for the code, I was using vanilla Node code - the kind you see as the
>> most basic example (no web frameworks or anything) but for Akka, I used the
>> high level DSL.
>>
>>
>> Here's the Code:
>>
>>
>> *Akka HTTP*
>>
>>
>> package com.example.rest
>>
>> import akka.actor.ActorSystem
>> import akka.http.scaladsl.Http
>> import akka.http.scaladsl.server.Directives._
>> import akka.stream.ActorMaterializer
>>
>>
>> case class Reply(message: String = "Hello World", userCount: Int)
>>
>> object MyJsonProtocol
>>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
>> with spray.json.DefaultJsonProtocol {
>>
>>   implicit val replyFormat = jsonFormat2(Reply.apply)
>> }
>>
>> object FullWebServer {
>>   var userCount = 0;
>>
>>   def getReply() = {
>> userCount += 1
>> Reply(userCount=userCount)
>>   }
>>
>>   def main(args: Array[String]) {
>> implicit val system = ActorSystem()
>> implicit val materializer = ActorMaterializer()
>> import MyJsonProtocol._
>>
>> val route =
>>   get {
>> complete(getReply())
>>   }
>>
>> // `route` will be implicitly converted to `Flow` using 
>> `RouteResult.route2HandlerFlow`
>> val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
>> println("Server online at http://127.0.0.1:3000/")
>>   }
>> }
>>
>>
>> *Node*
>>
>> var http = require('http');
>>
>> let userCount = 0;
>> var server = http.createServer(function (request, response) {
>> userCount++;
>> response.writeHead(200, {"Content-Type": "application/json"});
>> const hello = {msg: "Hello world", userCount: userCount};
>> response.end(JSON.stringify(hello));
>> });
>>
>> server.listen(3000);
>>
>> console.log("Server running at http://127.0.0.1:3000/");
>>
>> (to be more exact there's also some wrapping code because I'm running this 
>> in a cluster so all cores can be utilized)
>>
>>
>> So for the first test, things are pretty much the same - Akka HTTP uses
>> less CPU (4-6% vs. 10% in Node) and has a slightly lower average response
>> time, but a higher max response time.
>>
>> Not very interesting.
>>
>>
>> The second test was more one sided though.
>>
>>
>> The Node version maxed out the CPU and got the following results:
>>
>>
>> Running 5m test @ 

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Christian Schmitt
Actually, wouldn't it be more reasonable to try it against Netty?
I mean, that Node is slower than akka-http isn't something I wonder about.

On Monday, 12 September 2016 at 12:12:29 UTC+2, Konrad Malawski wrote:
>
> Hi Adam,
> thanks for sharing the runs!
> Your benchmarking method is good - thanks for doing a proper warmup and 
> using wrk2 :-)
> Notice that the multiple second response times in node basically mean it's 
> not keeping up and stalling the connections (also known as coordinated 
> omission).
>
> It's great to see such side by side with node, thanks for sharing it again.
> Happy hakking!
>
> On Mon, Sep 12, 2016 at 10:33 AM, Adam  
> wrote:
>
>> Hi,
>>
>> I'd just like to share my satisfaction from Akka HTTP performance in 
>> 2.4.10.
>> I'm diagnosing some low level Node.js performance issues and while 
>> running various tests that only require the most basic "Hello World" style 
>> code, I decided to take a few minutes to check how would Akka HTTP handle 
>> the same work.
>> I was quite impressed with the results, so I thought I'd share.
>>
>> I'm running two c4.large instances (so two cores on each instance) - one 
>> running the HTTP service and another running wrk2.
>> I've tested only two short sets (seeing as I have other work to do):
>>
>>1. use 2 threads to simulate 100 concurrent users pushing 2k 
>>requests/sec for 5 minutes
>>2. use 2 threads to simulate 100 concurrent users pushing 20k 
>>requests/sec for 5 minutes
>>
>> In both cases, the tests are actually executed twice without a restart in 
>> between and I throw away the results of the first run.
>>
>> The first run is just to get JIT and other adaptive mechanisms to do 
>> their thing.
>>
>> 5 minutes seems to be enough based on the CPU behavior I see, but for a 
>> more "official" test I'd probably use something longer.
>>
>>
>> As for the code, I was using vanilla Node code - the kind you see as the 
>> most basic example (no web frameworks or anything) but for Akka, I used the 
>> high level DSL.
>>
>>
>> Here's the Code:
>>
>>
>> *Akka HTTP*
>>
>>
>> package com.example.rest
>>
>> import akka.actor.ActorSystem
>> import akka.http.scaladsl.Http
>> import akka.http.scaladsl.server.Directives._
>> import akka.stream.ActorMaterializer
>>
>>
>> case class Reply(message: String = "Hello World", userCount: Int)
>>
>> object MyJsonProtocol
>>   extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
>> with spray.json.DefaultJsonProtocol {
>>
>>   implicit val replyFormat = jsonFormat2(Reply.apply)
>> }
>>
>> object FullWebServer {
>>   var userCount = 0;
>>
>>   def getReply() = {
>> userCount += 1
>> Reply(userCount=userCount)
>>   }
>>
>>   def main(args: Array[String]) {
>> implicit val system = ActorSystem()
>> implicit val materializer = ActorMaterializer()
>> import MyJsonProtocol._
>>
>> val route =
>>   get {
>> complete(getReply())
>>   }
>>
>> // `route` will be implicitly converted to `Flow` using 
>> `RouteResult.route2HandlerFlow`
>> val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
>> println("Server online at http://127.0.0.1:3000/")
>>   }
>> }
>>
>>
>> *Node*
>>
>> var http = require('http');
>>
>> let userCount = 0;
>> var server = http.createServer(function (request, response) {
>> userCount++;
>> response.writeHead(200, {"Content-Type": "application/json"});
>> const hello = {msg: "Hello world", userCount: userCount};
>> response.end(JSON.stringify(hello));
>> });
>>
>> server.listen(3000);
>>
>> console.log("Server running at http://127.0.0.1:3000/");
>>
>> (to be more exact there's also some wrapping code because I'm running this 
>> in a cluster so all cores can be utilized)
>>
>>
>> So for the first test, things are pretty much the same - Akka HTTP uses 
>> less CPU (4-6% vs. 10% in Node) and has a slightly lower average response 
>> time, but a higher max response time.
>>
>> Not very interesting.
>>
>>
>> The second test was more one sided though.
>>
>>
>> The Node version maxed out the CPU and got the following results:
>>
>>
>> Running 5m test @ http://srv-02:3000/
>>   2 threads and 100 connections
>>   Thread calibration: mean lat.: 215.794ms, rate sampling interval: 1623ms
>>   Thread calibration: mean lat.: 366.732ms, rate sampling interval: 1959ms
>>   Thread Stats   Avg  Stdev Max   +/- Stdev
>> Latency 5.31s 4.48s   16.66s65.79%
>> Req/Sec 9.70k 0.87k   10.86k57.85%
>>   5806492 requests in 5.00m, 1.01GB read
>> Requests/sec:  19354.95
>> Transfer/sec:  3.43MB
>>
>>
>> Whereas for the Akka HTTP version I saw each core using ~40% CPU 
>> throughout the test and I had the following results:
>>
>> Running 5m test @ http://srv-02:3000/
>>   2 threads and 100 connections
>>   Thread calibration: mean lat.: 5.044ms, rate sampling interval: 10ms
>>   Thread calibration: mean lat.: 5.308ms, rate sampling interval: 10ms
>>   

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Viktor Klang
Cool! (you may want to use an AtomicInteger to generate unique sequence
numbers)
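
For illustration, here is one way Viktor's suggestion could look: a minimal sketch, assuming the `Reply` case class from the quoted example, that swaps the mutable `var userCount` for a `java.util.concurrent.atomic.AtomicInteger` so concurrent request handlers can't race on the counter (the `UserCounter` name is illustrative, not from the thread):

```scala
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical drop-in replacement for the counter in FullWebServer:
// incrementAndGet() is an atomic read-modify-write, so concurrent
// handlers each observe a unique, monotonically increasing value,
// unlike the unsynchronized `userCount += 1` on a plain var.
object UserCounter {
  private val count = new AtomicInteger(0)

  def next(): Int = count.incrementAndGet()
}
```

`getReply()` in the quoted server would then reduce to `def getReply() = Reply(userCount = UserCounter.next())`.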

On Mon, Sep 12, 2016 at 12:12 PM, Konrad Malawski 
wrote:

> Hi Adam,
> thanks for sharing the runs!
> Your benchmarking method is good - thanks for doing a proper warmup and
> using wrk2 :-)
> Notice that the multiple second response times in node basically mean it's
> not keeping up and stalling the connections (also known as coordinated
> omission).
>
> It's great to see such side by side with node, thanks for sharing it again.
> Happy hakking!
>
> On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:
>
>> [...]

Re: [akka-user] Enjoying Akka HTTP performance

2016-09-12 Thread Konrad Malawski
Hi Adam,
thanks for sharing the runs!
Your benchmarking method is good - thanks for doing a proper warmup and
using wrk2 :-)
Notice that the multiple second response times in node basically mean it's
not keeping up and stalling the connections (also known as coordinated
omission).

It's great to see such side by side with node, thanks for sharing it again.
Happy hakking!

On Mon, Sep 12, 2016 at 10:33 AM, Adam  wrote:

> Hi,
>
> I'd just like to share my satisfaction from Akka HTTP performance in
> 2.4.10.
> I'm diagnosing some low-level Node.js performance issues, and while running
> various tests that only require the most basic "Hello World" style code, I
> decided to take a few minutes to check how Akka HTTP would handle the same
> work.
> I was quite impressed with the results, so I thought I'd share.
>
> I'm running two c4.large instances (so two cores on each instance) - one
> running the HTTP service and another running wrk2.
> I've tested only two short sets (seeing as I have other work to do):
>
>1. use 2 threads to simulate 100 concurrent users pushing 2k
>requests/sec for 5 minutes
>2. use 2 threads to simulate 100 concurrent users pushing 20k
>requests/sec for 5 minutes
>
> In both cases, the tests are actually executed twice without a restart in
> between and I throw away the results of the first run.
>
> The first run is just to get JIT and other adaptive mechanisms to do their
> thing.
>
> 5 minutes seems to be enough based on the CPU behavior I see, but for a
> more "official" test I'd probably use something longer.
>
>
> As for the code, I was using vanilla Node code - the kind you see as the
> most basic example (no web frameworks or anything) but for Akka, I used the
> high level DSL.
>
>
> Here's the Code:
>
>
> *Akka HTTP*
>
>
> package com.example.rest
>
> import akka.actor.ActorSystem
> import akka.http.scaladsl.Http
> import akka.http.scaladsl.server.Directives._
> import akka.stream.ActorMaterializer
>
>
> case class Reply(message: String = "Hello World", userCount: Int)
>
> object MyJsonProtocol
>     extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
>     with spray.json.DefaultJsonProtocol {
>
>   implicit val replyFormat = jsonFormat2(Reply.apply)
> }
>
> object FullWebServer {
>   var userCount = 0
>
>   def getReply() = {
>     userCount += 1
>     Reply(userCount = userCount)
>   }
>
>   def main(args: Array[String]) {
>     implicit val system = ActorSystem()
>     implicit val materializer = ActorMaterializer()
>     import MyJsonProtocol._
>
>     val route =
>       get {
>         complete(getReply())
>       }
>
>     // `route` is implicitly converted to a `Flow` via `RouteResult.route2HandlerFlow`
>     val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
>     println("Server online at http://127.0.0.1:3000/")
>   }
> }
>
>
> *Node*
>
> var http = require('http');
>
> let userCount = 0;
> var server = http.createServer(function (request, response) {
>   userCount++;
>   response.writeHead(200, {"Content-Type": "application/json"});
>   const hello = {msg: "Hello world", userCount: userCount};
>   response.end(JSON.stringify(hello));
> });
>
> server.listen(3000);
>
> console.log("Server running at http://127.0.0.1:3000/");
>
> (to be more exact there's also some wrapping code because I'm running this in 
> a cluster so all cores can be utilized)
>
>
> So for the first test, things are pretty much the same - Akka HTTP uses
> less CPU (4-6% vs. 10% in Node) and has a slightly lower average response
> time, but a higher max response time.
>
> Not very interesting.
>
>
> The second test was more one-sided, though.
>
>
> The Node version maxed out the CPU and got the following results:
>
>
> Running 5m test @ http://srv-02:3000/
>   2 threads and 100 connections
>   Thread calibration: mean lat.: 215.794ms, rate sampling interval: 1623ms
>   Thread calibration: mean lat.: 366.732ms, rate sampling interval: 1959ms
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency 5.31s 4.48s   16.66s65.79%
> Req/Sec 9.70k 0.87k   10.86k57.85%
>   5806492 requests in 5.00m, 1.01GB read
> Requests/sec:  19354.95
> Transfer/sec:  3.43MB
>
>
> Whereas for the Akka HTTP version I saw each core using ~40% CPU
> throughout the test and I had the following results:
>
> Running 5m test @ http://srv-02:3000/
>   2 threads and 100 connections
>   Thread calibration: mean lat.: 5.044ms, rate sampling interval: 10ms
>   Thread calibration: mean lat.: 5.308ms, rate sampling interval: 10ms
>   Thread Stats   Avg  Stdev Max   +/- Stdev
> Latency 1.83ms1.27ms  78.91ms   95.96%
> Req/Sec10.55k 1.79k   28.22k75.98%
>   5997552 requests in 5.00m, 1.00GB read
> Requests/sec:  19991.72
> Transfer/sec:  3.41MB
>
>
> Which is not a huge increase over 2K requests/sec:
>
>
> Running 5m test @ http://srv-02:3000/
>   2 threads and 100 connections
>