[
https://issues.apache.org/jira/browse/STORM-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14004979#comment-14004979
]
ASF GitHub Bot commented on STORM-297:
--------------------------------------
Github user revans2 commented on the pull request:
https://github.com/apache/incubator-storm/pull/103#issuecomment-43791269
Yes, getting the best performance out of a topology really depends on the
resources that your topology is using. If your topology is CPU bound, you want
to spread it out so that you have enough cores to handle the parallelism, but
if your topology is I/O bound, you want to colocate its components as much as
possible. The best performance optimization is simply to stop doing unnecessary
work: if you can cut out serialization/deserialization and avoid sending tuples
to another process, even over the loopback device, that can be a big win.
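As a concrete illustration (a minimal sketch, not taken from this pull request;
MySpout and MyBolt are hypothetical placeholder components), packing all
executors into a single worker keeps shuffle-grouped tuples on in-process
queues, so they skip serialization and the socket path entirely:

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;

    public class ColocatedTopology {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // MySpout and MyBolt stand in for your own components.
            builder.setSpout("spout", new MySpout(), 4);
            builder.setBolt("bolt", new MyBolt(), 4).shuffleGrouping("spout");

            Config conf = new Config();
            // One worker per topology: every executor shares a JVM, so tuples
            // move over in-memory queues instead of being serialized and sent
            // through a socket (even a loopback one).
            conf.setNumWorkers(1);

            StormSubmitter.submitTopology("colocated", conf, builder.createTopology());
        }
    }

The trade-off, of course, is that a single worker can only use the cores of one
machine, which is exactly the CPU-bound case where you want to spread out instead.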
The really difficult part is that parts of your topology may be CPU bound,
other parts may be I/O bound, and other parts may be constrained by memory
(which has its own limitations). You may also have a different definition of
"best". Some users require very low latency and are willing to let most of the
cluster sit idle so that, when something happens, they know they can process it
very quickly. Other times you are willing to sacrifice latency to be sure that
everything you want to run fits on a smaller set of hardware.
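For instance (illustrative numbers only; these are just the standard Config
knobs, not settings recommended by this patch), those two goals pull the same
configuration in opposite directions:

    import backtype.storm.Config;

    public class TuningProfiles {
        // Low-latency profile: spread work over many workers/cores and keep
        // only a few tuples in flight so each one is handled almost as soon
        // as it arrives.
        public static Config lowLatency() {
            Config conf = new Config();
            conf.setNumWorkers(16);        // many JVMs, plenty of spare cores
            conf.setMaxSpoutPending(100);  // shallow in-flight window, little queueing delay
            return conf;
        }

        // Dense profile: pack executors into a couple of workers and allow a
        // deep in-flight window, trading latency for high utilization of a
        // small slice of the cluster.
        public static Config dense() {
            Config conf = new Config();
            conf.setNumWorkers(2);
            conf.setMaxSpoutPending(10000);
            return conf;
        }
    }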
> Storm Performance cannot be scaled up by adding more CPU cores
> --------------------------------------------------------------
>
> Key: STORM-297
> URL: https://issues.apache.org/jira/browse/STORM-297
> Project: Apache Storm (Incubating)
> Issue Type: Bug
> Reporter: Sean Zhong
> Labels: Performance, netty
> Fix For: 0.9.2-incubating
>
> Attachments: Storm_performance_fix.pdf,
> storm_Netty_receiver_diagram.png, storm_performance_fix.patch
>
>
> We cannot scale up performance by adding more CPU cores and increasing
> parallelism.
> For a 2-layer topology (Spout --shuffle grouping--> Bolt) with a small message
> size (around 100 bytes), the attached picture shows that neither the CPU nor
> the network is saturated: at 100-byte messages only 40% of the CPU and 18% of
> the network are used, even though the parallelism is high (144 executors
> overall).
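A minimal sketch of a benchmark topology along those lines (the 48/96 executor
split, the worker count, and the payload contents are assumptions; the report
above only gives the 144-executor total):

    import java.util.Map;

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.spout.SpoutOutputCollector;
    import backtype.storm.task.OutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.TopologyBuilder;
    import backtype.storm.topology.base.BaseRichBolt;
    import backtype.storm.topology.base.BaseRichSpout;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class ShuffleBenchmark {
        // Spout that emits a fixed ~100-byte string as fast as it can.
        public static class PayloadSpout extends BaseRichSpout {
            private SpoutOutputCollector collector;
            private String payload;

            @Override
            public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
                this.collector = collector;
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 100; i++) {
                    sb.append('x');   // ~100 bytes of payload
                }
                this.payload = sb.toString();
            }

            @Override
            public void nextTuple() {
                collector.emit(new Values(payload));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("payload"));
            }
        }

        // Bolt that just acks; the cost being measured is the shuffle transfer.
        public static class SinkBolt extends BaseRichBolt {
            private OutputCollector collector;

            @Override
            public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
                this.collector = collector;
            }

            @Override
            public void execute(Tuple tuple) {
                collector.ack(tuple);
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // 48 spout + 96 bolt executors = 144 executors overall (illustrative split).
            builder.setSpout("spout", new PayloadSpout(), 48);
            builder.setBolt("bolt", new SinkBolt(), 96).shuffleGrouping("spout");

            Config conf = new Config();
            conf.setNumWorkers(12);   // illustrative worker count
            StormSubmitter.submitTopology("shuffle-benchmark", conf, builder.createTopology());
        }
    }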