Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r127636725
--- Diff: flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java ---
@@ -505,6 +519,21 @@ public void run(SourceContext<T> sourceContext) throws Exception {
				throw new Exception("The partitions were not set for the consumer");
			}
+		// initialize commit metrics and default offset callback method
+		this.successfulCommits = this.getRuntimeContext().getMetricGroup().counter("commitsSucceeded");
+		this.failedCommits = this.getRuntimeContext().getMetricGroup().counter("commitsFailed");
+
+		this.offsetCommitCallback = new KafkaCommitCallback() {
+			@Override
+			public void onComplete(Exception exception) {
+				if (exception == null) {
+					successfulCommits.inc();
--- End diff --
I would also like to raise a thread-safety issue here.
Currently, since there is always at most one pending offset commit in the Kafka 0.9+ consumers, and the Kafka 0.8 consumer commits offsets in a blocking call, there is no race condition when incrementing these counters. However, changing these implementations in subclasses (perhaps in the future) could easily introduce race conditions here. At the very least, we should probably document the thread-safety contract in the Javadoc of `commitInternalOffsetsToKafka`.
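To illustrate the concern, here is a minimal standalone sketch (not Flink's actual `Counter`/`SimpleCounter` API) of a counter whose `inc()` stays correct even if a subclass ever allowed multiple offset commits to complete concurrently on different threads:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical thread-safe counter for illustration only; Flink's default
// metric counters make no such guarantee, which is exactly the concern above.
final class ThreadSafeCounter {
    private final AtomicLong count = new AtomicLong();

    void inc() {
        // atomic read-modify-write, so concurrent callbacks cannot lose increments
        count.incrementAndGet();
    }

    long getCount() {
        return count.get();
    }
}

public class CommitCallbackDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadSafeCounter successfulCommits = new ThreadSafeCounter();

        // Simulate two commit callbacks completing on different threads,
        // as could happen if overlapping offset commits were ever permitted.
        Thread t1 = new Thread(successfulCommits::inc);
        Thread t2 = new Thread(successfulCommits::inc);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(successfulCommits.getCount()); // prints 2
    }
}
```

With a plain non-atomic `long` field, the same two-thread run could occasionally report 1 instead of 2, which is why spelling out the threading contract in the Javadoc matters.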