Thanks, Nathan, for your answer,

But I'm afraid you've misunderstood me: with executors increased by 32x,
each executor's throughput *increased* by 5x, and yet the complete latency
dropped.

On Tue, May 19, 2015 at 5:16 PM, Nathan Leung <[email protected]> wrote:

> It depends on your application and the characteristics of the I/O. You
> increased executors by 32x and each executor's throughput dropped by 5x, so
> it makes sense that latency will drop.
> On May 19, 2015 9:54 AM, "Dima Dragan" <[email protected]> wrote:
>
>> Hi everyone,
>>
>> I have noticed some strange behavior in the topology metrics.
>>
>> Let's say we have a single-node, 2-core machine and a simple Storm
>> topology:
>> Spout A -> Bolt B -> Bolt C
>>
>> Bolt B splits each message into 320 parts and emits each part (shuffle
>> grouping) to Bolt C. Bolts B and C also perform some read/write operations
>> against a DB.
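
(For reference, the wiring looks roughly like the sketch below; SpoutA, BoltB,
BoltC, the component ids, and the parallelism values are placeholders for
illustration, not the real code.)

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.topology.TopologyBuilder;

public class TopologySketch {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Continuous input source (Spout A).
        builder.setSpout("spoutA", new SpoutA(), 1);

        // Bolt B reads/writes the DB and splits each message into ~320 parts.
        builder.setBolt("boltB", new BoltB(), 1)
               .shuffleGrouping("spoutA");

        // Each part is shuffle-grouped across Bolt C's executors.
        builder.setBolt("boltC", new BoltC(), 2)
               .shuffleGrouping("boltB");

        Config conf = new Config();
        conf.setNumWorkers(1); // single-node test setup

        new LocalCluster().submitTopology("split-topology", conf,
                builder.createTopology());
    }
}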
>>
>> The input flow is continuous and constant.
>>
>> Logically, setting the number of executors for Bolt C higher than the
>> number of cores should be useless (most of the threads would just be
>> sleeping).
>> This is confirmed by the increase in execute and process latency.
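
(Concretely, the executor count for Bolt C is just the parallelism hint from
the sketch above, so nothing stops it from being set well past the core count;
setNumTasks is optional here and only matters if the executor count is to be
raised later via rebalance.)

// 64 executor threads for Bolt C on the 2-core box; each executor is a thread,
// and with I/O-bound work most of them just sit blocked on the DB.
builder.setBolt("boltC", new BoltC(), 64)
       .setNumTasks(64)              // the task count is fixed at submit time
       .shuffleGrouping("boltB");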
>>
>> But I noticed that the complete latency started to decrease, and I do not
>> understand why.
>>
>> For example, here are the stats for Bolt C:
>>
>> Executors | Process latency (ms) | Complete latency (ms)
>>         2 |                5.599 |               897.276
>>         4 |                  6.3 |                 526.3
>>        64 |               28.432 |               345.454
>>
>> Is this a side effect of the tasks being I/O-bound?
>>
>> Thanks in advance.
>>
>> --
>> Best regards,
>> Dmytro Dragan
>>
>>
>>


-- 
Best regards,
Dmytro Dragan
