Hi,

I'd just like to share my satisfaction with Akka HTTP's performance in 2.4.10.
I'm diagnosing some low level Node.js performance issues and while running 
various tests that only require the most basic "Hello World" style code, I 
decided to take a few minutes to check how Akka HTTP would handle the same 
work.
I was quite impressed with the results, so I thought I'd share.

I'm running two c4.large instances (so two cores on each instance) - one 
running the HTTP service and another running wrk2.
I've tested only two short scenarios (seeing as I have other work to do):

   1. use 2 threads to simulate 100 concurrent users pushing 2k 
   requests/sec for 5 minutes
   2. use 2 threads to simulate 100 concurrent users pushing 20k 
   requests/sec for 5 minutes

In both cases, the tests are actually executed twice without a restart in 
between and I throw away the results of the first run.

The first run is just to get JIT and other adaptive mechanisms to do their 
thing.

5 minutes seems to be enough based on the CPU behavior I see, but for a 
more "official" test I'd probably use something longer.
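For concreteness, the two scenarios above correspond to wrk2 invocations along these lines (a sketch from memory rather than my exact command lines; `-R` is wrk2's target-throughput flag):

```
# Scenario 1: 2 threads, 100 connections, 2k req/s held for 5 minutes
wrk -t2 -c100 -d5m -R2000 http://srv-02:3000/

# Scenario 2: same, but pushing 20k req/s
wrk -t2 -c100 -d5m -R20000 http://srv-02:3000/
```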


As for the code, I was using vanilla Node code - the kind you see in the 
most basic examples (no web frameworks or anything) - whereas for Akka I 
used the high-level DSL.


Here's the code:


*Akka HTTP*


package com.example.rest

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer


case class Reply(message: String = "Hello World", userCount: Int)

object MyJsonProtocol
  extends akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
    with spray.json.DefaultJsonProtocol {

  implicit val replyFormat = jsonFormat2(Reply.apply)
}

object FullWebServer {
  // NB: a plain var is not thread-safe under concurrent requests;
  // fine for a throwaway benchmark, but a real counter should be atomic.
  var userCount = 0

  def getReply() = {
    userCount += 1
    Reply(userCount = userCount)
  }

  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    import MyJsonProtocol._

    val route =
      get {
        complete(getReply())
      }

    // `route` will be implicitly converted to `Flow` using `RouteResult.route2HandlerFlow`
    val bindingFuture = Http().bindAndHandle(route, "0.0.0.0", 3000)
    println("Server online at http://127.0.0.1:3000/")
  }
}


*Node*

const http = require('http');

let userCount = 0;
const server = http.createServer(function (request, response) {
    userCount++;
    response.writeHead(200, {"Content-Type": "application/json"});
    const hello = {msg: "Hello world", userCount: userCount};
    response.end(JSON.stringify(hello));
});

server.listen(3000);

console.log("Server running at http://127.0.0.1:3000/");

(to be more exact, there's also some wrapping code because I run this in a 
cluster so all cores can be utilized)


So for the first test, things are pretty much the same: Akka HTTP uses 
less CPU (4-6% vs. 10% for Node) and has a slightly lower average response 
time, but a higher max response time.

Not very interesting.


The second test was more one-sided, though.


The Node version maxed out the CPU and got the following results:


Running 5m test @ http://srv-02:3000/
  2 threads and 100 connections
  Thread calibration: mean lat.: 215.794ms, rate sampling interval: 1623ms
  Thread calibration: mean lat.: 366.732ms, rate sampling interval: 1959ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.31s     4.48s   16.66s    65.79%
    Req/Sec     9.70k     0.87k   10.86k    57.85%
  5806492 requests in 5.00m, 1.01GB read
Requests/sec:  19354.95
Transfer/sec:      3.43MB


Whereas for the Akka HTTP version I saw each core using ~40% CPU throughout 
the test and I had the following results:

Running 5m test @ http://srv-02:3000/
  2 threads and 100 connections
  Thread calibration: mean lat.: 5.044ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 5.308ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.83ms    1.27ms  78.91ms   95.96%
    Req/Sec    10.55k     1.79k   28.22k    75.98%
  5997552 requests in 5.00m, 1.00GB read
Requests/sec:  19991.72
Transfer/sec:      3.41MB


Which is not a huge increase over the 2k requests/sec run:


Running 5m test @ http://srv-02:3000/
  2 threads and 100 connections
  Thread calibration: mean lat.: 1.565ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.557ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.07ms  479.75us   8.09ms   62.57%
    Req/Sec     1.06k   131.65     1.78k    79.05%
  599804 requests in 5.00m, 101.77MB read
Requests/sec:   1999.33
Transfer/sec:    347.39KB



In summary, I know this is far from a conclusive test, but I was still 
quite excited to see the results.

Keep up the good work!
