No worries, thanks Chris!
I think most feedback has been covered and the KIP is ready for vote. Will
be starting the vote thread soon.
Cheers,
Jorge.
On Mon, 5 Dec 2022 at 15:10, Chris Egerton wrote:
Hi Jorge,
Thanks for indulging my paranoia. LGTM!
Cheers,
Chris
On Mon, Dec 5, 2022 at 10:06 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:
Sure! I have added the following to the proposed changes section:
```
The per-record metrics will definitely be added to Kafka Connect as part of
this KIP, but their metric level will be changed pending the performance
testing described in KAFKA-14441, and will otherwise only be exposed at
Hi Jorge,
Thanks for filing KAFKA-14441! In the ticket description we mention that
"there will be more confidence whether to design metrics to be exposed at a
DEBUG or INFO level depending on their impact" but it doesn't seem like
this is called out in the KIP and, just based on what's in the
Thanks for the reminder Chris!
I have added a note to the KIP to cover this, as most of the proposed metrics
are per-record and having them all at DEBUG would limit the benefits, and
created https://issues.apache.org/jira/browse/KAFKA-14441
to keep track of this task.
Cheers,
Hi Jorge,
Thanks! What were your thoughts on the possible benchmarking and/or
downgrading of per-record metrics to DEBUG?
Cheers,
Chris
On Thu, Nov 24, 2022 at 8:20 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:
Thanks Chris! I have updated the KIP with "transform" instead of "alias".
Agree it's clearer.
Cheers,
Jorge.
On Mon, 21 Nov 2022 at 21:36, Chris Egerton wrote:
Hi Jorge,
Thanks for the updates, and apologies for the delay. The new diagram
directly under the "Proposed Changes" section is absolutely gorgeous!
Follow-ups:
RE 2: Good point. We can use the same level for these metrics, it's not a
big deal.
RE 3: As long as all the per-record metrics are
Thanks Mickael!
On Wed, 9 Nov 2022 at 15:54, Mickael Maison wrote:
Hi Jorge,
Thanks for the KIP, it is a nice improvement.
1) The per-transformation metrics still have a question mark next to
them in the KIP. Do you want to include them? If so, we'll want to tag
them; we should be able to include the aliases in TransformationChain
and use them.
2) I see no
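For illustration, tagging each SMT's metrics with its alias might look roughly like this. This is a sketch with made-up class and method names, not Kafka Connect's actual implementation; only the "transform" tag key is taken from the KIP discussion:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: assembling per-transformation metric tags, with the
// transformation alias as an extra tag alongside connector and task.
public class TransformMetricTags {

    static Map<String, String> tags(String connector, int task, String transformAlias) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("connector", connector);
        tags.put("task", String.valueOf(task));
        tags.put("transform", transformAlias); // alias from the connector's transforms config
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(tags("my-connector", 0, "maskField"));
    }
}
```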
Thanks, Chris! Great feedback! Please, find my comments below:
On Thu, 13 Oct 2022 at 18:52, Chris Egerton wrote:
Hi Jorge,
Thanks for the KIP. I agree with the overall direction and think this would
be a nice improvement to Kafka Connect. Here are my initial thoughts on the
details:
1. The motivation section outlines the gaps in Kafka Connect's task metrics
nicely. I think it'd be useful to include more
Hi everyone,
I've made a slight addition to the KIP based on Yash's feedback:
- A new metric is added at INFO level to record the max latency from the
batch timestamp, by keeping the oldest record timestamp per batch.
- A draft implementation is linked.
Looking forward to your feedback.
Also, a
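As a rough sketch of the batch-timestamp idea above (keep the oldest record timestamp per batch; its age is the batch's max end-to-end latency), with illustrative names rather than the KIP's draft implementation:

```java
import java.util.List;

// Sketch of the max-latency-per-batch idea: track the oldest record
// timestamp in a batch and report that record's age as the max latency.
public class BatchLatencySketch {

    static long maxBatchLatencyMs(List<Long> recordTimestampsMs, long nowMs) {
        long oldest = Long.MAX_VALUE;
        for (long ts : recordTimestampsMs) {
            oldest = Math.min(oldest, ts); // keep the oldest timestamp seen in the batch
        }
        // No records -> nothing to report; otherwise the age of the oldest record.
        return oldest == Long.MAX_VALUE ? 0L : nowMs - oldest;
    }

    public static void main(String[] args) {
        System.out.println(maxBatchLatencyMs(List.of(9_500L, 9_000L, 9_800L), 10_000L)); // 1000
    }
}
```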
Great. I have updated the KIP to reflect this.
Cheers,
Jorge.
On Thu, 8 Sept 2022 at 12:26, Yash Mayya wrote:
Thanks, I think it makes sense to define these metrics at a DEBUG recording
level.
On Thu, Sep 8, 2022 at 2:51 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:
On Thu, 8 Sept 2022 at 05:55, Yash Mayya wrote:
Hi Jorge,
Thanks for the changes. With regard to having per batch vs per record
metrics, the additional overhead I was referring to wasn't about whether or
not we would need to iterate over all the records in a batch. I was
referring to the potential additional overhead caused by the higher
Hi Sagar and Yash,
> the way it's defined in
https://kafka.apache.org/documentation/#connect_monitoring for the metrics
4.1. Got it. Added it to the KIP.
> The only thing I would argue is do we need sink-record-latency-min? Maybe
> we could remove this min metric as well and make all of the 3 e2e
On Sat, 3 Sep 2022 at 17:02, Yash Mayya wrote:
Hi Jorge and Sagar,
I think it makes sense to not have a min metric for either to remain
consistent with the existing put-batch and poll-batch metrics (it doesn't
seem particularly useful either anyway). Also, the new
"sink-record-latency" metric name looks fine to me, thanks for making the
Hi Jorge,
Thanks for the changes.
Regarding the metrics, I meant something like this:
kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
the way it's defined in
https://kafka.apache.org/documentation/#connect_monitoring for the metrics.
I see what you mean by the 3
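For reference, an MBean name in the pattern above can be built and inspected with the standard javax.management.ObjectName; the connector and task values here are placeholders:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Sketch: constructing a sink-task-metrics MBean name in the pattern from
// https://kafka.apache.org/documentation/#connect_monitoring.
public class SinkTaskMetricName {

    static ObjectName sinkTaskMetrics(String connector, int task) {
        try {
            return new ObjectName(String.format(
                    "kafka.connect:type=sink-task-metrics,connector=\"%s\",task=\"%d\"",
                    connector, task));
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        ObjectName name = sinkTaskMetrics("my-sink", 0);
        System.out.println(name.getDomain());            // kafka.connect
        System.out.println(name.getKeyProperty("type")); // sink-task-metrics
    }
}
```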
Hi Sagar and Yash,
Thanks for your feedback!
> 1) I am assuming the new metrics would be task level metric.
1.1 Yes, it will be a task-level metric, implemented on the
Worker[Source/Sink]Task.
> Could you specify the way it's done for other sink/source connector?
1.2. Not sure what do you
Hi Jorge,
Thanks for the KIP! I have the same confusion with the e2e-latency metrics
as Sagar above. "e2e" would seem to indicate the latency between when the
record was written to Kafka and when the record was written to the sink
system by the connector - however, as per the KIP it looks like it
Hi Jorge,
Thanks for the KIP. It looks like a very good addition. I skimmed through
once and had a couple of questions =>
1) I am assuming the new metrics would be task level metric. Could you
specify the way it's done for other sink/source connector?
2) I am slightly confused about the e2e
Hi all,
I'd like to start a discussion thread on KIP-864: Add End-To-End Latency
Metrics to Connectors.
This KIP aims to improve the metrics available on Source and Sink
Connectors to measure end-to-end latency, including source and sink record
conversion time, and sink record e2e latency
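To make the "record conversion time" part concrete, a minimal sketch of timing a converter call; the Converter interface here is a stand-in for illustration, not Kafka Connect's actual org.apache.kafka.connect.storage.Converter API:

```java
// Minimal sketch of measuring record conversion time: wrap the converter
// call and record the elapsed nanos to feed into a metric sensor.
public class ConversionTimeSketch {

    interface Converter {
        byte[] convert(Object record);
    }

    static long timedConvertNanos(Converter converter, Object record) {
        long start = System.nanoTime();
        converter.convert(record);        // the conversion being measured
        return System.nanoTime() - start; // elapsed time to report as a metric
    }

    public static void main(String[] args) {
        long elapsed = timedConvertNanos(r -> new byte[0], "some-record");
        System.out.println(elapsed >= 0); // true: nanoTime is monotonic
    }
}
```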