[
https://issues.apache.org/jira/browse/STORM-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003660#comment-14003660
]
ASF GitHub Bot commented on STORM-297:
--------------------------------------
Github user clockfly commented on the pull request:
https://github.com/apache/incubator-storm/pull/103#issuecomment-43656390
Gvain,
> Besides, by using a SHARED thread pool (its default size is 1) among all
Netty clients within a worker, the number of Netty threads does not increase
as the total worker count increases. Check [jira][storm-12]. So, increasing
the worker count may not cause a Netty context-switching problem.
Context switching here means that Netty threads from different worker
processes on the same machine will compete with each other.
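To make that concrete, here is a minimal sketch of the shared-thread-pool
idea, written against the Netty 4 client API purely for illustration (the
class and method names below are hypothetical, not Storm's actual messaging
code): every outbound client inside one worker process reuses a single
event-loop group, so adding more target workers does not add Netty threads
inside that process; the contention described above only shows up across
worker processes on the same machine.

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

// Hypothetical sketch: one shared event-loop group per worker process.
public class SharedClientPool {
    // Shared by every Netty client in this worker; one thread, mirroring the default size of 1.
    private static final EventLoopGroup SHARED_GROUP = new NioEventLoopGroup(1);

    // Each remote worker gets its own Bootstrap, but all of them run on
    // SHARED_GROUP, so the per-process Netty thread count stays flat.
    public static Bootstrap newClient() {
        return new Bootstrap()
                .group(SHARED_GROUP)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.TCP_NODELAY, true)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // encoder/decoder pipeline setup would go here
                    }
                });
    }
}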
> "3. More outbound acker message count. Usually we will allocate one acker
to one worker."
But you allocate 48 ackers to only 4 workers.
Usually one acker per worker will suffice. But in this performance
benchmarking case, the ackers become a bottleneck, because the message count
is huge.
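For comparison, the usual "one acker per worker" setup for a 4-worker
topology would be configured roughly like this (a sketch using the standard
backtype.storm.Config setters; the wrapper class is just for illustration):

import backtype.storm.Config;

public class AckerConfigExample {
    public static Config oneAckerPerWorker() {
        Config conf = new Config();
        conf.setNumWorkers(4); // 4 worker processes
        conf.setNumAckers(4);  // one acker executor per worker is normally enough
        // conf.setNumAckers(48) packs 48 acker executors into only 4 workers;
        // that only pays off when acking itself is the bottleneck, as in this
        // high-message-rate benchmark.
        return conf;
    }
}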
> Storm Performance cannot be scaled up by adding more CPU cores
> --------------------------------------------------------------
>
> Key: STORM-297
> URL: https://issues.apache.org/jira/browse/STORM-297
> Project: Apache Storm (Incubating)
> Issue Type: Bug
> Reporter: Sean Zhong
> Labels: Performance, netty
> Fix For: 0.9.2-incubating
>
> Attachments: Storm_performance_fix.pdf,
> storm_Netty_receiver_diagram.png, storm_performance_fix.patch
>
>
> We cannot scale up the performance by adding more CPU cores and increasing
> parallelism.
> For a 2-layer topology Spout ---shuffle grouping--> Bolt, when the message
> size is small (around 100 bytes), the attached picture shows that neither
> the CPU nor the network is saturated: only 40% of the CPU and only 18% of
> the network are used, even though parallelism is high (144 executors
> overall).
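For reference, a two-layer topology of the shape described above could be
sketched as follows; the spout, bolt, and parallelism numbers are hypothetical
stand-ins (chosen only to mirror the ~100-byte messages and the 144-executor
scale), not the actual benchmark code:

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

import java.util.Map;

public class TwoLayerTopology {

    // Hypothetical spout: emits a fixed ~100-byte payload as fast as possible.
    public static class MessageSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String payload = new String(new char[100]).replace('\0', 'x');

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values(payload));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("msg"));
        }
    }

    // Hypothetical bolt: discards the tuple, so only messaging cost is measured.
    public static class PassThroughBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            // no-op
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
        }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new MessageSpout(), 72);
        builder.setBolt("bolt", new PassThroughBolt(), 72)
               .shuffleGrouping("spout");   // 2-layer: spout -> shuffle grouping -> bolt

        Config conf = new Config();
        conf.setNumWorkers(4);

        new LocalCluster().submitTopology("two-layer-benchmark", conf, builder.createTopology());
    }
}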