[
https://issues.apache.org/jira/browse/STORM-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003418#comment-14003418
]
ASF GitHub Bot commented on STORM-297:
--------------------------------------
Github user Gvain commented on the pull request:
https://github.com/apache/incubator-storm/pull/103#issuecomment-43639415
"So in common practice, each worker will have a moderate size of executors,
neither too small, nor too big."
I agree with this. But what size is considered too big, and what too small?
Is 36 executors per worker too big? If even a few executors fail to heartbeat
to Nimbus, the whole worker will be restarted.
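To be concrete, the timeouts that drive this behavior are cluster-side
settings (a sketch via their Config constants; the 30-second values are the
usual defaults, not numbers from this thread):

    import backtype.storm.Config;

    import java.util.HashMap;
    import java.util.Map;

    public class HeartbeatTimeoutsSketch {
        public static void main(String[] args) {
            // These live in storm.yaml on the cluster, not in the topology conf.
            Map<String, Object> clusterConf = new HashMap<String, Object>();
            // Nimbus waits this long for an executor's heartbeat before it
            // declares the task dead and reassigns the worker's slot.
            clusterConf.put(Config.NIMBUS_TASK_TIMEOUT_SECS, 30);
            // The supervisor waits this long for a worker's heartbeat before
            // it kills and restarts the whole worker process.
            clusterConf.put(Config.SUPERVISOR_WORKER_TIMEOUT_SECS, 30);
            System.out.println(clusterConf);
        }
    }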
Besides, by using a SHARED thread pool (its default size is 1) among all
Netty clients within a worker, the number of Netty threads does not grow as
the total number of workers grows. See STORM-12. So increasing the worker
count may not cause a Netty context-switching problem.
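To illustrate the shared-pool idea, here is a minimal sketch (in Netty 4
style for brevity; Storm 0.9.x's client is built on Netty 3, but the
principle is the same: every outbound client in the worker runs on one
small, shared event-loop group, so the thread count stays flat no matter
how many peer workers there are):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;

    public class SharedClientPoolSketch {
        // ONE event-loop group of size 1, shared by every outbound client in
        // this worker (mirrors the shared threadpool with default size 1).
        private static final EventLoopGroup SHARED = new NioEventLoopGroup(1);

        // Each peer worker gets its own Bootstrap, but all of them do their
        // I/O on the single shared thread above.
        public static Bootstrap clientFor(String host, int port) {
            return new Bootstrap()
                    .group(SHARED)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // encoder/decoder handlers would be added here
                        }
                    })
                    .remoteAddress(host, port);
        }
    }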
"3. More outbound acker message count. Usually we will allocate one acker
to one worker."
But you allocated 48 ackers to only 4 workers, i.e. 12 ackers per worker
rather than one.
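For comparison, the numbers in question would be set roughly like this
(a sketch using the plain Config API; 48 ackers over 4 workers is 12 per
worker, not the usual 1):

    import backtype.storm.Config;

    public class AckerConfigSketch {
        public static void main(String[] args) {
            Config conf = new Config();
            conf.setNumWorkers(4);  // 4 worker JVMs
            conf.setNumAckers(48);  // 48 acker executors, i.e. 12 per worker
            // conf would then go to StormSubmitter.submitTopology(...)
        }
    }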
> Storm Performance cannot be scaled up by adding more CPU cores
> --------------------------------------------------------------
>
> Key: STORM-297
> URL: https://issues.apache.org/jira/browse/STORM-297
> Project: Apache Storm (Incubating)
> Issue Type: Bug
> Reporter: Sean Zhong
> Labels: Performance, netty
> Fix For: 0.9.2-incubating
>
> Attachments: Storm_performance_fix.pdf,
> storm_Netty_receiver_diagram.png, storm_performance_fix.patch
>
>
> We cannot scale performance up by adding more CPU cores and increasing
> parallelism.
> For a 2-layer topology (Spout --shuffle grouping--> Bolt) with a small
> message size (around 100 bytes), the attached picture shows that neither
> the CPU nor the network is saturated: only 40% of the CPU and 18% of the
> network are used, even though parallelism is high (144 executors overall).
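For context, a two-layer topology of that shape is wired up roughly like
this (a sketch: MySpout and MyBolt are hypothetical stand-ins, and the
72/72 split is an assumption, since the issue only gives the 144-executor
total):

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;

    public class TwoLayerTopologySketch {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // MySpout and MyBolt are hypothetical; any spout/bolt pair fits.
            builder.setSpout("spout", new MySpout(), 72);
            builder.setBolt("bolt", new MyBolt(), 72)
                   .shuffleGrouping("spout");  // the shuffle-grouped edge

            Config conf = new Config();
            conf.setNumWorkers(4);
            StormSubmitter.submitTopology("two-layer-bench", conf,
                    builder.createTopology());
        }
    }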