Emmy,

Below are some preliminary results comparing the unix socket (port 8888)
with the tcp socket (port 8889).

This is even less scientific, as the differences between these two are
going to depend a lot on content type and size, so the small fixed content
I'm using is not very realistic.

The unix socket has slightly better latency and slightly better
throughput, but nothing too significant. It will be interesting to test
with larger and/or more diverse payloads, and also with many more mostly
idle connections, where scheduling delays become more important. No time
as yet to do such tests.
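For reference, the haproxy config used for these runs isn't shown in this
thread, but the setup being compared would be roughly the following kind of
thing. The socket path, backend port, and section names below are all made
up for illustration; only the two frontend ports match the runs:

```
# Hypothetical haproxy.cfg fragment -- the real config isn't in this thread.
# Socket path and backend port are placeholders.
defaults
    mode http

frontend fe_unix
    bind :8888
    default_backend be_unix

frontend fe_tcp
    bind :8889
    default_backend be_tcp

backend be_unix
    # forward to jetty over a unix domain socket
    server jetty unix@/var/run/jetty.sock

backend be_tcp
    # forward to jetty over loopback tcp
    server jetty 127.0.0.1:8080
```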

cheers



gregw@Tile440: ~
[2058] ab -n 100000 -c 100 -k http://localhost:8888/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        Jetty(9.3.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8888

Document Path:          /
Document Length:        1045 bytes

Concurrency Level:      100
Time taken for tests:   1.520 seconds
Complete requests:      100000
Failed requests:        0
Keep-Alive requests:    100000
Total transferred:      121700000 bytes
HTML transferred:       104500000 bytes
Requests per second:    65778.70 [#/sec] (mean)
Time per request:       1.520 [ms] (mean)
Time per request:       0.015 [ms] (mean, across all concurrent requests)
Transfer rate:          78176.44 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       9
Processing:     0    2   0.8      1      18
Waiting:        0    2   0.8      1      18
Total:          0    2   0.8      1      20
WARNING: The median and mean for the processing time are not within a
normal deviation
        These results are probably not that reliable.
WARNING: The median and mean for the waiting time are not within a normal
deviation
        These results are probably not that reliable.
WARNING: The median and mean for the total time are not within a normal
deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      2
  90%      2
  95%      3
  98%      3
  99%      4
 100%     20 (longest request)

gregw@Tile440: ~
[2059] ab -n 100000 -c 100 -k http://localhost:8889/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        Jetty(9.3.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8889

Document Path:          /
Document Length:        1045 bytes

Concurrency Level:      100
Time taken for tests:   1.585 seconds
Complete requests:      100000
Failed requests:        0
Keep-Alive requests:    100000
Total transferred:      121700000 bytes
HTML transferred:       104500000 bytes
Requests per second:    63103.63 [#/sec] (mean)
Time per request:       1.585 [ms] (mean)
Time per request:       0.016 [ms] (mean, across all concurrent requests)
Transfer rate:          74997.18 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0      10
Processing:     0    2   1.1      1      34
Waiting:        0    2   1.1      1      34
Total:          0    2   1.2      1      34

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      2
  90%      3
  95%      3
  98%      5
  99%      7
 100%     34 (longest request)
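For what it's worth, the throughput gap between the two runs above works
out to about 4%, which is small enough to be within run-to-run noise. A
quick one-liner, with the requests-per-second figures pasted in from the
two ab reports:

```shell
# Relative throughput difference, unix socket vs tcp (figures from ab above)
awk 'BEGIN { unix=65778.70; tcp=63103.63; printf "%.1f%%\n", 100*(unix-tcp)/tcp }'
# -> 4.2%
```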




On 19 November 2015 at 23:27, Emmeran Seehuber <[email protected]> wrote:

> Hi Greg,
>
> would you mind adding numbers for using haproxy + jetty using TCP/IP
> instead of Unix sockets? AFAIR Unix sockets don't allow the use of
> sendfile/splice on Linux and may therefore be slower than TCP/IP. HAProxy
> uses splicing whenever possible.
>
> Thanks.
>
> cu,
>   Emmy
>
> Am 19.11.2015 um 13:11 schrieb Greg Wilkins <[email protected]>:
>
> Simone
>
> I'm testing all 4 possibilities. The 30% slow down is from clear text
> direct to jetty versus clear text proxied to jetty.  No ssl. That makes
> sense to me as the proxy requires handling by 2 processes with the same cpu
> available.
>
> The 100% improvement is comparing direct ssl with proxied+offloaded ssl.
> It shows the ssl performance gains are more than enough to compensate for
> the costs of proxying.
> On 19 Nov 2015 8:17 pm, "Simone Bordet" <[email protected]> wrote:
>
>> Greg,
>>
>> On Thu, Nov 19, 2015 at 6:09 AM, Greg Wilkins <[email protected]> wrote:
>> >
>> >
>> > So here are some numbers using ab with keep alive option:
>> >
>> > HTTP :8080  98634.66 [#/sec] 117224.98 [Kbytes/sec]
>> > HTTP :8888  67073.40 [#/sec]  79715.16 [Kbytes/sec]
>> > HTTPS:8443  23622.46 [#/sec]  28074.74 [Kbytes/sec]
>> > HTTPS:8843  52365.51 [#/sec]  62235.18 [Kbytes/sec]
>>
>> Uhm.
>>
>> Proxying via HAProxy seems to slow down clear-text HTTP by 30%. That
>> seems *a lot* to me.
>>
>> Are you offloading TLS at HAProxy and then forwarding the clear-text
>> bytes to backend ?
>> So the TLS numbers are actually measuring the difference in TLS
>> implementations ?
>>
>> If you're not offloading TLS at HAProxy, then how come passing raw
>> bytes to the backend yields such a difference (lose 30% for clear-text
>> bytes, *gain* 100% for encrypted bytes) ?
>>
>> --
>> Simone Bordet
>> ----
>> http://cometd.org
>> http://webtide.com
>> Developer advice, training, services and support
>> from the Jetty & CometD experts.
>> _______________________________________________
>> jetty-users mailing list
>> [email protected]
>> To change your delivery options, retrieve your password, or unsubscribe
>> from this list, visit
>> https://dev.eclipse.org/mailman/listinfo/jetty-users
>>
>
>
> Mit freundlichen Grüßen aus Augsburg
>
> Emmeran Seehuber
> Dipl. Inf. (FH)
> Schrannenstraße 8
> 86150 Augsburg
> USt-IdNr.: DE266070804
>
>
>



-- 
Greg Wilkins <[email protected]> CTO http://webtide.com
