glaksh100 commented on a change in pull request #6408: [FLINK-9897][Kinesis Connector] Make adaptive reads depend on run loop time instead of fetchintervalmillis
URL: https://github.com/apache/flink/pull/6408#discussion_r206363100
 
 

 ##########
 File path: flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java
 ##########
 @@ -233,26 +225,69 @@ public void run() {
 						subscribedShard.getShard().getHashKeyRange().getEndingHashKey());
 
 					long recordBatchSizeBytes = 0L;
-					long averageRecordSizeBytes = 0L;
-
 					for (UserRecord record : fetchedRecords) {
 						recordBatchSizeBytes += record.getData().remaining();
 						deserializeRecordForCollectionAndUpdateState(record);
 					}
 
-					if (useAdaptiveReads && !fetchedRecords.isEmpty()) {
-						averageRecordSizeBytes = recordBatchSizeBytes / fetchedRecords.size();
-						maxNumberOfRecordsPerFetch = getAdaptiveMaxRecordsPerFetch(averageRecordSizeBytes);
-					}
-
 					nextShardItr = getRecordsResult.getNextShardIterator();
+
+					long processingEndTimeNanos = System.nanoTime();
 
 Review comment:
   Removed.
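
For context, the diff above moves the adaptive sizing out of this spot and measures the run-loop duration (`processingEndTimeNanos`) instead of relying on the configured fetch interval. A minimal sketch of that idea, sizing the next fetch so the batch stays under the per-shard Kinesis read limit for the measured loop time, might look like the following. The class and method names, the hard cap parameter, and the exact formula are assumptions for illustration, not the actual `ShardConsumer` implementation; only the 2 MB/s per-shard read limit is a documented Kinesis constraint.

```java
// Hypothetical sketch, NOT the Flink ShardConsumer implementation:
// derive maxNumberOfRecordsPerFetch from the average record size of the
// last batch and the measured duration of the last run-loop iteration.
public class AdaptiveFetchSizer {

	// Kinesis allows reading at most 2 MB per second per shard (AWS limit).
	private static final long SHARD_READ_LIMIT_BYTES_PER_SEC = 2L * 1024 * 1024;

	/**
	 * @param averageRecordSizeBytes average record size in the last batch
	 * @param loopDurationNanos      measured run-loop duration, e.g.
	 *                               processingEndTimeNanos - processingStartTimeNanos
	 * @param hardCap                upper bound, e.g. the GetRecords API max of 10000
	 */
	public static int adaptiveMaxRecordsPerFetch(
			long averageRecordSizeBytes, long loopDurationNanos, int hardCap) {
		if (averageRecordSizeBytes <= 0) {
			// No data to base an estimate on; fall back to the cap.
			return hardCap;
		}
		double loopSeconds = loopDurationNanos / 1_000_000_000.0;
		// Byte budget for one loop iteration at the per-shard read limit.
		long budgetBytes = (long) (SHARD_READ_LIMIT_BYTES_PER_SEC * loopSeconds);
		long records = budgetBytes / averageRecordSizeBytes;
		return (int) Math.max(1, Math.min(records, hardCap));
	}
}
```

With 1 KB average records and a one-second loop, the sketch yields a 2 MB budget, i.e. 2048 records, well under a 10000-record cap.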

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
