Hi,

Thanks.

I see that the latency stabilizes over time. I ran the WordCount topology
with 3 worker nodes, and the latency stabilizes after about 2 hours.

Is there any other way to measure the end-to-end latency of a topology
other than the "complete latency" shown in the Storm UI?
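One common alternative (not something from this thread, just a sketch of the
general technique) is to tag each tuple with an emit timestamp at the spout
and compute the delta in the terminal bolt, then report it through a metrics
consumer. The snippet below is a hypothetical, Storm-free Python illustration
of that idea; the function names `emit` and `process` are made up for the
example, and the sleep stands in for split/count processing and queueing:

```python
import time
import statistics

def emit(word):
    """Spout side: attach an emit timestamp as an extra tuple field."""
    return {"word": word, "emit_ts": time.time()}

def process(tup, latencies):
    """Terminal bolt side: record end-to-end latency for this tuple."""
    latencies.append(time.time() - tup["emit_ts"])

latencies = []
for w in ["the", "quick", "brown", "fox"]:
    tup = emit(w)
    time.sleep(0.01)  # stand-in for downstream processing and buffering
    process(tup, latencies)

print(f"mean end-to-end latency: {statistics.mean(latencies) * 1000:.1f} ms")
```

In a real topology the timestamp would travel as an extra field on the tuple,
and the per-tuple delta would be fed to a metrics registry rather than a list.
Note this measures spout-to-last-bolt time, whereas complete latency also
includes the time for the ack to travel back to the spout.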

Best,
Preethini




On Mon, Jul 17, 2017 at 5:41 AM, Ambud Sharma <[email protected]>
wrote:

> If I may add, it is also explained by a potential surge of tuples when the
> topology starts, which will eventually settle at an equilibrium: the normal
> latency of your topology components.
>
> On Jul 14, 2017 4:29 AM, "preethini v" <[email protected]> wrote:
>
>> Hi,
>>
>> I am running WordCountTopology with 3 worker nodes. The parallelism of
>> spout, split and count is 5, 8 and 12 respectively. I have enabled acking
>> to measure the complete latency of the topology.
>>
>> I am considering complete latency as a measure of end-to-end latency.
>>
>> The complete latency is the time from when a tuple is emitted by a Spout
>> until Spout.ack() is called. Thus, it includes the tuple's processing
>> time, the time it spends in the internal input/output buffers, and the
>> time until the ack for the tuple is received by the Spout.
>>
>> The stats from storm UI show that the complete latency for a topology
>> keeps decreasing with time.
>>
>> 1. Is this normal?
>> 2. If yes, what explains the continuously decreasing complete latency
>> value?
>> 3. Is complete latency a good measure of end-to-end latency of a topology?
>>
>> Thanks,
>> Preethini
>>
>
