[ https://issues.apache.org/jira/browse/TS-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878579#comment-13878579 ]

Leif Hedstrom edited comment on TS-2395 at 1/22/14 12:10 PM:
-------------------------------------------------------------

Can we get some more details on this? I've benchmarked on my home box at about 
25,000 QPS without keep-alive, and 100,000 QPS with keep-alive. I have seen 
some very odd behavior on a different box which has multiple NUMA nodes (and 
is running an older kernel, CentOS 6.4), but there it shows up as what looks 
like severe slowness on a (small) number of the requests. Typically this 
slowness seems to come from the connection taking a long time to establish.
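
For reference (and not taken from this ticket): a minimal sketch of how one 
could separate connect time from response time when chasing that kind of 
slowness. It assumes a local Traffic Server listening on 127.0.0.1:8080 and a 
hypothetical /acme.html health-check path; both are placeholders.

/*
 * Minimal timing sketch (illustrative only). The address, port and the
 * /acme.html path are assumptions, not taken from this ticket.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static double now_ms(void)
{
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(void)
{
  const char *request = "GET /acme.html HTTP/1.1\r\n"
                        "Host: localhost\r\n"
                        "Connection: close\r\n\r\n";
  struct sockaddr_in addr;
  char buf[4096];

  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port   = htons(8080); /* assumed listen port */
  inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

  double t0 = now_ms();
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
    perror("connect");
    return 1;
  }
  double t_connected = now_ms();

  if (write(fd, request, strlen(request)) < 0) {
    perror("write");
    close(fd);
    return 1;
  }
  while (read(fd, buf, sizeof(buf)) > 0) {
    /* drain the response; only the timing matters here */
  }
  double t_done = now_ms();
  close(fd);

  printf("connect: %.2f ms, request+response: %.2f ms\n",
         t_connected - t0, t_done - t_connected);
  return 0;
}

Compile with "cc -o hc_time hc_time.c" and run it in a loop; if the connect 
number dominates, the delay is in connection setup rather than in the 
plugin's response.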


> Healthchecks plugin using 100% of CPU
> -------------------------------------
>
>                 Key: TS-2395
>                 URL: https://issues.apache.org/jira/browse/TS-2395
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Plugins
>            Reporter: David Carlin
>            Assignee: Leif Hedstrom
>             Fix For: 4.2.0
>
>
> I had configured the healthchecks plugin to serve four healthchecks - they 
> come in pretty frequently, sometimes hundreds per second.  Use of the plugin 
> would peg every core to 100% in htop.  Response times for the healthchecks 
> were often around 4 seconds.
> The code from master was used to build the plugin.
> perf top output showed the following:
> 87.74%  [kernel]                 [k] _spin_lock


