For 20 spouts and an even number of processing bolts, 3 seems like an odd
number of workers. Also, are you sure you're not bottlenecked by your
throughput-measuring bolt? With parallelism 1, it serializes everything
the processing bolts emit.
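
For reference, this is roughly how the wiring you describe maps onto the
TopologyBuilder API (a sketch against the pre-1.0 backtype.storm package;
the class names EventSpout, ProcessingBolt, and ThroughputBolt are
placeholders for the classes in your repo, not the actual names):

```java
import backtype.storm.Config;
import backtype.storm.topology.TopologyBuilder;

public class ScalingTestTopology {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        // Spout parallelism kept constant at 20, as in the tests.
        builder.setSpout("events", new EventSpout(), 20);
        // Processing bolt parallelism is the variable: 2, 4, 8, or 16.
        builder.setBolt("process", new ProcessingBolt(), 8)
               .shuffleGrouping("events");
        // Single throughput-measuring bolt: a potential serialization point.
        builder.setBolt("throughput", new ThroughputBolt(), 1)
               .shuffleGrouping("process");

        Config conf = new Config();
        conf.setNumWorkers(3); // Topology.NUM_WORKERS = 3
        // ... submit via StormSubmitter or LocalCluster as usual.
    }
}
```

With 3 workers, the 20 spout executors and the measuring bolt land
unevenly across JVMs, which is why matching workers to node count (or a
multiple of it) is the usual starting point.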
On May 13, 2014 2:43 AM, "Lasantha Fernando" <[email protected]> wrote:

> Hi all,
>
> Is there any guide or hints on how to configure storm to scale better?
>
> I was running some tests with a custom scheduler and found that the
> throughput did not scale as expected. Any pointers on what I am doing wrong?
>
> Parallelism        2        4         8         16
> Single Node (Avg)  166099   161539.5  193986    N/A
> Two Node (Avg)     160988   165563    174675.5  177624.5
>
> The topology is as follows.
>
> Spout (Generates events continuously) -> Processing Bolt -> Throughput
> Measurement Bolt
>
> Parallelism is varied for the processing bolt.
>
> Parallelism for spout and throughput measuring bolt is kept constant at 20
> and 1 respectively.
>
> Topology.NUM_WORKERS = 3
>
> Custom scheduler code is available at [1]. Topology code is available at
> [2]. Any pointers would be much appreciated.
>
> Thanks,
> Lasantha
>
> [1]
> https://github.com/sajithshn/storm-schedulers/blob/master/src/main/java/org/wso2/siddhi/storm/scheduler/RoundRobinStormScheduler.java
> [2]
> https://github.com/lasanthafdo/siddhi-storm/blob/master/src/main/java/org/wso2/siddhi/storm/StockDataTopology.java
>
