Thanks for the KIP, Sophie. Getting the E2E latency is important for understanding the bottlenecks of an application.
A couple of questions and ideas:

1. Could you clarify the rationale for picking the 75th, 99th, and max
percentiles? Normally I also see the 50th and 90th percentiles used in
production systems.

2. The latency currently being computed is cumulative, i.e. if a record
goes through A -> B -> C, then P(C) = T(B->C) + P(B) = T(B->C) + T(A->B) +
T(A), and so on, where P() is the captured latency and T() is the time to
transit the record between two nodes, including processing time. For
monitoring purposes, T(A->B) and T(B->C) might be more natural to view as
"hop-to-hop latency"; otherwise a spike in T(A->B) affects both P(B) and
P(C) at the same time. In the same spirit, the E2E latency is meaningful
only when the record exits from the sink, as that marks the whole time
the record spent inside the funnel. Do you think we could treat sink
nodes and other nodes differently, so that non-sink nodes only count the
time since receiving the record from the last hop? I'm not proposing a
solution here, just raising this alternative to see whether it is
reasonable. (A toy sketch of what I mean is at the bottom of this mail.)

3. As we are going to monitor late-arriving records as well, they would
create some really spiky graphs when out-of-order records are interleaved
with on-time records. Should we also supply a smoothed version of the
latency metrics, or should users just take care of that themselves?
(Again, see the toy sketch at the bottom.)

4. We haven't yet discussed how these new metrics relate to our existing
processing-latency metrics. Could you add some context comparing the two,
plus a simple "when to use which" guide?

Boyang

On Tue, May 12, 2020 at 7:28 PM Sophie Blee-Goldman <sop...@confluent.io>
wrote:

> Hey all,
>
> I'd like to kick off discussion on KIP-613 which aims to add end-to-end
> latency metrics to Streams. Please take a look:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-613%3A+Add+end-to-end+latency+metrics+to+Streams
>
> Cheers,
> Sophie
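
P.S. Here is the toy sketch for point 2. It's plain standalone Java with
made-up names (nothing from the KIP or the Streams metrics API), just
showing that hop-to-hop latency T(prev->curr) can be recovered from the
cumulative values as P(curr) - P(prev):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class HopLatencyExample {

        public static void main(String[] args) {
            // Cumulative latency P(node) captured for one record along
            // the path A -> B -> C, in milliseconds.
            Map<String, Long> cumulativeMs = new LinkedHashMap<>();
            cumulativeMs.put("A", 5L);  // P(A) = T(A)
            cumulativeMs.put("B", 25L); // P(B) = P(A) + T(A->B)
            cumulativeMs.put("C", 40L); // P(C) = P(B) + T(B->C)

            // T(prev->curr) = P(curr) - P(prev): a spike in one hop then
            // shows up only in that hop's metric, not in every downstream
            // node's cumulative value.
            String prev = null;
            long prevMs = 0L;
            for (Map.Entry<String, Long> e : cumulativeMs.entrySet()) {
                if (prev != null) {
                    System.out.printf("T(%s->%s) = %d ms%n",
                            prev, e.getKey(), e.getValue() - prevMs);
                }
                prev = e.getKey();
                prevMs = e.getValue();
            }
        }
    }

So users could always derive the hop-to-hop view themselves by
subtracting adjacent cumulative values; the question is just whether
reporting it directly would be friendlier.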
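
And for point 3, the kind of smoothing I have in mind is nothing fancy,
just an exponentially weighted moving average. Again this is a standalone
Java toy with made-up numbers, not the Kafka metrics library:

    public class SmoothedLatencyExample {

        public static void main(String[] args) {
            // Raw E2E latencies (ms); the 5000 is a late-arriving record
            // interleaved with on-time records.
            long[] rawMs = {20, 22, 19, 5000, 21, 23, 20};

            double alpha = 0.2;     // smoothing factor; lower = smoother
            double ewma = rawMs[0]; // seed with the first observation

            for (long raw : rawMs) {
                ewma = alpha * raw + (1 - alpha) * ewma;
                System.out.printf("raw = %5d ms, smoothed = %7.1f ms%n",
                        raw, ewma);
            }
            // The late record still registers, but as a dampened bump
            // rather than a spike that dwarfs the rest of the graph.
        }
    }

Whether Streams should do something like this itself or leave it to the
user's dashboarding tool is exactly what I'd like to discuss.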