rajagopr commented on code in PR #11769:
URL: https://github.com/apache/pinot/pull/11769#discussion_r1353051133


##########
pinot-plugins/pinot-stream-ingestion/pinot-kafka-2.0/src/main/java/org/apache/pinot/plugin/stream/kafka20/KafkaStreamMetadataProvider.java:
##########
@@ -57,7 +60,11 @@ public KafkaStreamMetadataProvider(String clientId, StreamConfig streamConfig, i
   @Override
   public int fetchPartitionCount(long timeoutMillis) {
     try {
-      return _consumer.partitionsFor(_topic, Duration.ofMillis(timeoutMillis)).size();
+      List<PartitionInfo> partitionInfos = _consumer.partitionsFor(_topic, Duration.ofMillis(timeoutMillis));
+      if (CollectionUtils.isNotEmpty(partitionInfos)) {
+        return partitionInfos.size();
+      }
+      throw new RuntimeException(String.format("Failed to fetch partition information for topic: %s", _topic));

Review Comment:
   @snleee – We got an NPE and did not get a `PermanentConsumerException` or `TransientConsumerException`. Double-checked on this. We have the following logs immediately before the NPE.
   
   ```
   2023/10/06 16:14:58.369 INFO [KafkaConsumer] [grizzly-http-server-7] [Consumer clientId=PartitionGroupMetadataFetcher-pv_daily_production_wells_benchmark_v1_REALTIME-production_volumes, groupId=startreetest-new] Subscribed to partition(s): production_volumes--2147483648
   2023/10/06 16:14:58.458 WARN [NetworkClient] [grizzly-http-server-7] [Consumer clientId=PartitionGroupMetadataFetcher-pv_daily_production_wells_benchmark_v1_REALTIME-production_volumes, groupId=startreetest-new] Error while fetching metadata with correlation id 2 : {production_volumes=UNKNOWN_TOPIC_OR_PARTITION}
   2023/10/06 16:14:58.458 INFO [Metadata] [grizzly-http-server-7] [Consumer clientId=PartitionGroupMetadataFetcher-pv_daily_production_wells_benchmark_v1_REALTIME-production_volumes, groupId=startreetest-new] Cluster ID: fhT-tgQyTWWpcu7vhp-0hw
   ```
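   For context, the failure mode being discussed is that `KafkaConsumer.partitionsFor(topic, timeout)` can return `null` when metadata for the topic is unavailable (e.g. `UNKNOWN_TOPIC_OR_PARTITION`), so calling `.size()` on the result directly throws an NPE. A minimal standalone sketch of the guard the diff introduces (the `partitionsFor` stub and topic names here are hypothetical stand-ins, not the real Kafka client call):

   ```java
   import java.util.List;

   public class PartitionCountGuard {
     // Hypothetical stand-in for KafkaConsumer.partitionsFor(topic, timeout):
     // the real client may return null when topic metadata is unavailable.
     static List<String> partitionsFor(String topic) {
       return "known_topic".equals(topic) ? List.of("p0", "p1") : null;
     }

     static int fetchPartitionCount(String topic) {
       List<String> partitionInfos = partitionsFor(topic);
       // Guard against both null and empty results before calling size(),
       // so callers see a descriptive error instead of an NPE.
       if (partitionInfos != null && !partitionInfos.isEmpty()) {
         return partitionInfos.size();
       }
       throw new RuntimeException(
           String.format("Failed to fetch partition information for topic: %s", topic));
     }

     public static void main(String[] args) {
       System.out.println(fetchPartitionCount("known_topic"));
       try {
         fetchPartitionCount("unknown_topic");
       } catch (RuntimeException e) {
         System.out.println(e.getMessage());
       }
     }
   }
   ```

   The null/empty check mirrors what `CollectionUtils.isNotEmpty` does in the actual patch; the point is that the error surfaced is now explicit rather than an NPE deep inside the provider.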



##########
pinot-plugins/pinot-stream-ingestion/pinot-kafka-2.0/src/main/java/org/apache/pinot/plugin/stream/kafka20/KafkaStreamMetadataProvider.java:
##########
@@ -57,7 +60,11 @@ public KafkaStreamMetadataProvider(String clientId, 
StreamConfig streamConfig, i
   @Override
   public int fetchPartitionCount(long timeoutMillis) {
     try {
-      return _consumer.partitionsFor(_topic, 
Duration.ofMillis(timeoutMillis)).size();
+      List<PartitionInfo> partitionInfos = _consumer.partitionsFor(_topic, 
Duration.ofMillis(timeoutMillis));
+      if (CollectionUtils.isNotEmpty(partitionInfos)) {
+        return partitionInfos.size();
+      }
+      throw new RuntimeException(String.format("Failed to fetch partition 
information for topic: %s", _topic));

Review Comment:
   @snleee – We got an NPE and did not get `PermanentConsumerException` or 
`TransientConsumerException`. Double checked on this. We have the following 
logs before the NPE.
   
   ```
   2023/10/06 16:14:58.369 INFO [KafkaConsumer] [grizzly-http-server-7] 
[Consumer 
clientId=PartitionGroupMetadataFetcher-pv_daily_production_wells_benchmark_v1_REALTIME-production_volumes,
 groupId=startreetest-new] Subscribed to 
   partition(s): production_volumes--2147483648
   2023/10/06 16:14:58.458 WARN [NetworkClient] [grizzly-http-server-7] 
[Consumer 
clientId=PartitionGroupMetadataFetcher-pv_daily_production_wells_benchmark_v1_REALTIME-production_volumes,
 groupId=startreetest-new] Error while fe
   tching metadata with correlation id 2 : 
{production_volumes=UNKNOWN_TOPIC_OR_PARTITION}
   2023/10/06 16:14:58.458 INFO [Metadata] [grizzly-http-server-7] [Consumer 
clientId=PartitionGroupMetadataFetcher-pv_daily_production_wells_benchmark_v1_REALTIME-production_volumes,
 groupId=startreetest-new] Cluster ID: fhT-tgQ
   yTWWpcu7vhp-0hw
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

