[ 
https://issues.apache.org/jira/browse/TS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13678775#comment-13678775
 ] 

Leif Hedstrom edited comment on TS-1405 at 6/8/13 5:10 PM:
-----------------------------------------------------------

Just to make sure I wasn't affected by the accept-thread (well, 
non-accept-thread) issue, I reran the tests with accept-thread enabled. I 
still see similarly poor performance with the last patch here:

Current master:
{code}
4794208 fetches on 4946 conns, 300 max parallel, 4.794208E+08 bytes, in 30 
seconds
100 mean bytes/fetch
159806.9 fetches/sec, 1.598069E+07 bytes/sec
msecs/connect: 0.536 mean, 3.346 max, 0.087 min
msecs/first-response: 1.670 mean, 247.579 max, 0.097 min
{code}

With time-wheel patch:
{code}
http_load  -parallel 60 -seconds 30 -keep_alive 1000 URL.small
3238265 fetches on 3354 conns, 300 max parallel, 3.238265E+08 bytes, in 30 
seconds
100 mean bytes/fetch
107942.2 fetches/sec, 1.079422E+07 bytes/sec
msecs/connect: 0.290 mean, 2.753 max, 0.076 min
msecs/first-response: 2.689 mean, 76.218 max, 0.084 min
{code}


I could probably live with the lower throughput, but 33% fewer requests 
handled, and still 60% higher latency? I'll try to investigate this further 
next week. I'd really like to see the scalability improvements, but not at 
this significant loss of throughput / performance for use cases with few 
connections.
                
> Apply time-wheel scheduler to event system
> -------------------------------------------
>
>                 Key: TS-1405
>                 URL: https://issues.apache.org/jira/browse/TS-1405
>             Project: Traffic Server
>          Issue Type: Improvement
>          Components: Core
>    Affects Versions: 3.2.0
>            Reporter: Bin Chen
>            Assignee: Bin Chen
>             Fix For: 3.3.5
>
>         Attachments: linux_time_wheel.patch, linux_time_wheel_v10jp.patch, 
> linux_time_wheel_v11jp.patch, linux_time_wheel_v2.patch, 
> linux_time_wheel_v3.patch, linux_time_wheel_v4.patch, 
> linux_time_wheel_v5.patch, linux_time_wheel_v6.patch, 
> linux_time_wheel_v7.patch, linux_time_wheel_v8.patch, 
> linux_time_wheel_v9jp.patch
>
>
> As more and more events accumulate in the event system, the scheduler gets 
> worse. This is the reason we use InactivityCop to handle keepalive. The new 
> scheduler is a time-wheel, which has better time complexity (O(1)).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
