[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16355288#comment-16355288 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user tzulitai closed the pull request at:

https://github.com/apache/flink/pull/5336


> Kafka consumer's offset metrics are not registered for dynamically discovered partitions
>
> Key: FLINK-8419
> URL: https://issues.apache.org/jira/browse/FLINK-8419
> Project: Flink
> Issue Type: Bug
> Components: Kafka Connector, Metrics
> Affects Versions: 1.4.0, 1.5.0
> Reporter: Tzu-Li (Gordon) Tai
> Assignee: Tzu-Li (Gordon) Tai
> Priority: Blocker
> Fix For: 1.5.0, 1.4.1
>
> Currently, the per-partition offset metrics are registered via the {{AbstractFetcher#addOffsetStateGauge}} method. That method is only ever called for the initial startup partitions, and not for dynamically discovered partitions.
> We should consider adding some unit tests to make sure that metrics are properly registered for all partitions. That would also safeguard us from accidentally removing metrics.
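For illustration, a minimal sketch of the registration pattern the fix aims for, using the metric group names quoted in the review diffs below ("current-offsets", "committed-offsets"). The types and method here are hypothetical stand-ins, not the actual Flink code:

```java
// Illustrative sketch only -- not the actual Flink implementation.
// The idea: per-partition offset gauges must be registered whenever partitions
// are added (at startup AND on discovery), not only for the startup partitions.
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

import java.util.List;

class OffsetMetricsSketch {

    // Hypothetical stand-in for the per-partition state (KafkaTopicPartitionState in Flink).
    static final class PartitionState {
        final String topic;
        final int partition;
        volatile long currentOffset = -1L;
        volatile long committedOffset = -1L;

        PartitionState(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    private final MetricGroup currentOffsetsGroup;
    private final MetricGroup committedOffsetsGroup;

    OffsetMetricsSketch(MetricGroup consumerMetricGroup) {
        // subgroups created once under the consumer metric group
        this.currentOffsetsGroup = consumerMetricGroup.addGroup("current-offsets");
        this.committedOffsetsGroup = consumerMetricGroup.addGroup("committed-offsets");
    }

    // Called for the startup partitions and again for every batch of
    // dynamically discovered partitions -- the call that was missing before.
    void registerOffsetMetrics(List<PartitionState> newPartitionStates) {
        for (PartitionState p : newPartitionStates) {
            String name = p.topic + "-" + p.partition;
            currentOffsetsGroup.gauge(name, (Gauge<Long>) () -> p.currentOffset);
            committedOffsetsGroup.gauge(name, (Gauge<Long>) () -> p.committedOffset);
        }
    }
}
```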





[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-02-06 Thread Tzu-Li (Gordon) Tai (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354377#comment-16354377 ]

Tzu-Li (Gordon) Tai commented on FLINK-8419:


Merged.

1.5 - 40f26c80476b884a6b7f3431562a78018fa014c5
1.4 - 432273aa963f46dc032c5007389a1a681b5188e6






[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354358#comment-16354358 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5335







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346783#comment-16346783 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5335#discussion_r165045598
  
--- Diff: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java ---
@@ -95,21 +95,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = metricGroup.addGroup(KAFKA_CONSUMER_METRICS_GROUP);
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should expose Kafka-shipped metrics through the consumer metric group, not subtask metric group
--- End diff --

Will address this as discussed in #5336, and then merge this.







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346636#comment-16346636 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165023129
  
--- Diff: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java ---
@@ -92,21 +93,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = metricGroup.addGroup("KafkaConsumer");
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should expose Kafka-shipped metrics through the consumer metric group, not subtask metric group
--- End diff --

For 1.4 I would just remove the TODO since we won't fix it there, but for 1.5 I would, as you suggested, register them twice.







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346639#comment-16346639 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165023649
  
--- Diff: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java ---
@@ -92,21 +93,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = metricGroup.addGroup("KafkaConsumer");
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should expose Kafka-shipped metrics through the consumer metric group, not subtask metric group
--- End diff --

👌 makes sense, I'll do that and merge this.
Thanks for the review!







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346634#comment-16346634 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5335#discussion_r165022709
  
--- Diff: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java ---
@@ -95,21 +95,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = metricGroup.addGroup(KAFKA_CONSUMER_METRICS_GROUP);
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should expose Kafka-shipped metrics through the consumer metric group, not subtask metric group
--- End diff --

so why aren't we passing the consumerMetricGroup here?







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346633#comment-16346633 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165022738
  
--- Diff: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java ---
@@ -92,21 +93,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = metricGroup.addGroup("KafkaConsumer");
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should expose Kafka-shipped metrics through the consumer metric group, not subtask metric group
--- End diff --

Because that would break compatibility with the previously exposed metrics.

On the other hand, it might make sense to additionally register the Kafka-shipped metrics under `consumerMetricGroup` now, so that we can eventually resolve this TODO. What do you think?
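For reference, a minimal sketch of the dual registration being discussed, with hypothetical method names; only `subtaskMetricGroup` and `consumerMetricGroup` come from the diff above, and the metric wrapper is purely illustrative:

```java
// Illustrative only: expose the same Kafka-shipped metric under both the
// legacy location (subtask metric group) and the consumer metric group, so
// previously exposed metric names stay compatible while the new scope is added.
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

final class DualRegistrationSketch {

    static void register(
            MetricGroup subtaskMetricGroup,
            MetricGroup consumerMetricGroup,
            String metricName,
            Gauge<Double> kafkaShippedMetric) {
        // legacy scope, kept so existing dashboards and metric names keep working
        subtaskMetricGroup.gauge(metricName, kafkaShippedMetric);
        // new scope, under the user-scope "KafkaConsumer" group
        consumerMetricGroup.gauge(metricName, kafkaShippedMetric);
    }
}
```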







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346624#comment-16346624 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165021513
  
--- Diff: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java ---
@@ -92,21 +93,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = metricGroup.addGroup("KafkaConsumer");
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should expose Kafka-shipped metrics through the consumer metric group, not subtask metric group
--- End diff --

so why aren't we passing the consumerMetricGroup here?







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346623#comment-16346623 ]

ASF GitHub Bot commented on FLINK-8419:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165021339
  
--- Diff: flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/AbstractFetcher.java ---
@@ -560,16 +585,11 @@ private void updateMinPunctuatedWatermark(Watermark nextWatermark) {
 
/**
 * Add current and committed offsets to metric group.
-*
-* @param metricGroup The metric group to use
 */
-   protected void addOffsetStateGauge(MetricGroup metricGroup) {
-   // add current offsets to gage
-   MetricGroup currentOffsets = metricGroup.addGroup("current-offsets");
-   MetricGroup committedOffsets = metricGroup.addGroup("committed-offsets");
-   for (KafkaTopicPartitionState ktp : subscribedPartitionStates) {
-   currentOffsets.gauge(ktp.getTopic() + "-" + ktp.getPartition(), new OffsetGauge(ktp, OffsetGaugeType.CURRENT_OFFSET));
-   committedOffsets.gauge(ktp.getTopic() + "-" + ktp.getPartition(), new OffsetGauge(ktp, OffsetGaugeType.COMMITTED_OFFSET));
+   protected void registerOffsetMetrics(List<KafkaTopicPartitionState> partitionOffsetStates) {
--- End diff --

make private?
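For context, a simplified sketch of the call sites this question is about (hypothetical names, not the actual Flink code; `PS` stands in for the partition-state type). The helper is presumably only invoked from inside the fetcher itself, once at construction for the startup partitions and again from `addDiscoveredPartitions`, hence the suggestion to make it private:

```java
import java.util.List;

// Simplified, hypothetical sketch of the fetcher's call sites for the
// offset-metric registration helper discussed in this review comment.
abstract class FetcherCallSitesSketch<PS> {

    protected FetcherCallSitesSketch(List<PS> startupPartitionStates) {
        registerOffsetMetrics(startupPartitionStates);   // initial startup partitions
    }

    public void addDiscoveredPartitions(List<PS> newPartitionStates) {
        // ... track the new partition states, then expose their offsets:
        registerOffsetMetrics(newPartitionStates);       // dynamically discovered partitions
    }

    private void registerOffsetMetrics(List<PS> partitionOffsetStates) {
        // per-partition gauge registration, as sketched earlier in this thread
    }
}
```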







[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-22 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334420#comment-16334420 ]

ASF GitHub Bot commented on FLINK-8419:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/5336

(release-1.4) [FLINK-8419] [kafka] Register metrics for dynamically discovered Kafka partitions

## What is the purpose of the change

This is a version of #5335 targeted at `release-1.4`. It does not include the new offset metrics added in #5214 for the new partitions (those metrics are only added on `master`).

## Brief change log

- db39ec2: Preliminary cleanup of the registration of the `KafkaConsumer` user scope metric group. This commit refactors that registration for better separation of concerns and less code duplication.
- 1acfc15: Register offset metrics for new partitions in `addDiscoveredPartitions`

## Verifying this change

No new tests were added.
By manually running a Flink job using the Kafka consumer and repartitioning 
the Kafka topic, you should be able to see metrics for the newly added 
partitions.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-8419-1.4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5336


commit db39ec2ed6f6b9cf46bc851125b859501d9a9df0
Author: Tzu-Li (Gordon) Tai 
Date:   2018-01-22T12:36:20Z

[FLINK-8419] [kafka] Move consumer metric group registration to 
FlinkKafkaConsumerBase

This commit is a refactor to move the registration of the consumer
metric group (user scope "KafkaConsumer") to FlinkKafkaConsumerBase.
Previously, the registration was scattered around in Kafka
version-specific subclasses.
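As a rough illustration of this refactor (hypothetical, simplified types; not the actual Flink classes), the base class would create the user-scope group once and pass it to whichever fetcher is instantiated:

```java
import org.apache.flink.metrics.MetricGroup;

// Illustrative sketch: the base consumer creates the user-scope "KafkaConsumer"
// group once and hands it to the fetcher, instead of each version-specific
// fetcher calling addGroup("KafkaConsumer") on its own.
final class ConsumerMetricGroupSketch {

    static final String KAFKA_CONSUMER_METRICS_GROUP = "KafkaConsumer";

    /** Stand-in for a version-specific fetcher factory; it only consumes the group. */
    interface FetcherFactory<F> {
        F create(MetricGroup consumerMetricGroup);
    }

    /** What the base class would do once, for all Kafka connector versions. */
    static <F> F createFetcher(MetricGroup runtimeMetricGroup, FetcherFactory<F> factory) {
        MetricGroup consumerMetricGroup =
                runtimeMetricGroup.addGroup(KAFKA_CONSUMER_METRICS_GROUP);
        return factory.create(consumerMetricGroup);
    }
}
```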

commit 1acfc1526e56abd5e5cb4f5a32641afbf2282905
Author: Tzu-Li (Gordon) Tai 
Date:   2018-01-22T13:14:44Z

[FLINK-8419] [kafka] Register metrics for dynamically discovered Kafka 
partitions









[jira] [Commented] (FLINK-8419) Kafka consumer's offset metrics are not registered for dynamically discovered partitions

2018-01-22 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334385#comment-16334385 ]

ASF GitHub Bot commented on FLINK-8419:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/5335

(master) [FLINK-8419] [kafka] Register metrics for dynamically discovered Kafka partitions

## What is the purpose of the change

This PR fixes the problem that offset metrics (i.e. current offset, committed offset) were not registered for partitions dynamically discovered by the `FlinkKafkaConsumerBase`.

This version is targeted for merge to `master`.
Another version targeted at `release-1.4`, which does not include the new offset metrics added in #5214, will be opened separately.

## Brief change log

- 54f3cfd: Preliminary cleanup of the registration of the `KafkaConsumer` user scope metric group. This commit refactors that registration for better separation of concerns and less code duplication.
- bf1e4ce: Register offset metrics for new partitions in `addDiscoveredPartitions`
- 7e90a75: Minor hotfix for inappropriate access modifiers in the `AbstractFetcher`

## Verifying this change

No new tests were added.
By manually running a Flink job using the Kafka consumer and repartitioning 
the Kafka topic, you should be able to see metrics for the newly added 
partitions.
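
As a sketch of the kind of unit check the issue description suggests, here is a completely hypothetical, self-contained harness (not a Flink test, and not part of this PR): after simulating partition discovery, offset gauges should exist for every partition, not only the startup ones.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical, self-contained illustration of the test idea from the issue
// description: verify gauges are registered for discovered partitions too.
public final class DiscoveredPartitionMetricsCheck {

    // tiny stand-in for a metric registry: metric name -> gauge
    private static final Map<String, Supplier<Long>> REGISTRY = new HashMap<>();

    private static void registerOffsetGauges(List<String> topicPartitions) {
        for (String tp : topicPartitions) {
            REGISTRY.put("current-offsets." + tp, () -> -1L);
            REGISTRY.put("committed-offsets." + tp, () -> -1L);
        }
    }

    public static void main(String[] args) {
        registerOffsetGauges(List.of("topic-0", "topic-1"));   // startup partitions
        registerOffsetGauges(List.of("topic-2"));              // "discovered" later

        for (String tp : List.of("topic-0", "topic-1", "topic-2")) {
            if (!REGISTRY.containsKey("current-offsets." + tp)
                    || !REGISTRY.containsKey("committed-offsets." + tp)) {
                throw new AssertionError("missing offset gauges for " + tp);
            }
        }
        System.out.println("offset gauges registered for all partitions");
    }
}
```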

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-8419

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5335.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5335


commit 54f3cfd3eb9f925266de37219ab56703a26795d6
Author: Tzu-Li (Gordon) Tai 
Date:   2018-01-22T12:36:20Z

[FLINK-8419] [kafka] Move consumer metric group registration to 
FlinkKafkaConsumerBase

This commit is a refactor to move the registration of the consumer
metric group (user scope "KafkaConsumer") to FlinkKafkaConsumerBase.
Previously, the registration was scattered around in Kafka
version-specific subclasses.

commit bf1e4ce73279a90d625cda66ee84c194d5ce3e34
Author: Tzu-Li (Gordon) Tai 
Date:   2018-01-22T13:14:44Z

[FLINK-8419] [kafka] Register metrics for dynamically discovered Kafka 
partitions

commit 7e90a753441b7a4af1dd400942d88fba7d0178df
Author: Tzu-Li (Gordon) Tai 
Date:   2018-01-22T13:17:54Z

[hotfix] [kafka] Fix inappropriate access modifiers in AbstractFetcher






