[ https://issues.apache.org/jira/browse/FLINK-9897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554727#comment-16554727 ]

ASF GitHub Bot commented on FLINK-9897:
---------------------------------------

GitHub user glaksh100 opened a pull request:

    https://github.com/apache/flink/pull/6408

    [FLINK-9897] Make adaptive reads depend on run loop time instead of fetchIntervalMillis

    ## What is the purpose of the change
    [FLINK-9692](https://github.com/apache/flink/pull/6300) introduced the feature of adapting `maxNumberOfRecordsPerFetch` based on the average size of Kinesis records, assuming a maximum of `1/fetchIntervalMillis` reads per second. However, when the run loop of the `ShardConsumer` takes longer than `fetchIntervalMillis` to process records, that `maxNumberOfRecordsPerFetch` is still sub-optimal. The purpose of this change is to make adaptive reads more efficient by using the actual run loop frequency to determine the number of reads per second and, from that, `maxNumberOfRecordsPerFetch`. The change also refactors the run loop to be more modular.
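    To make the byte-budget calculation concrete, here is a minimal sketch of deriving the fetch size from the measured loop time. The 2 MB/sec per-shard read limit and the 10,000-record cap per `GetRecords` call are documented Kinesis limits; the method shape and names are illustrative assumptions, not necessarily the exact code in this PR.
    ```java
    // Sketch: derive maxNumberOfRecordsPerFetch from the measured run loop time.
    // The 2 MB/s per-shard limit and the 10,000-record GetRecords cap are
    // documented Kinesis limits; everything else is an illustrative assumption.
    private static final long KINESIS_SHARD_BYTES_PER_SECOND_LIMIT = 2L * 1024 * 1024;
    private static final int GETRECORDS_MAX = 10_000;

    static int adaptRecordsToRead(long runLoopTimeNanos, int numRecords,
                                  long batchSizeBytes, int currentMax) {
        if (numRecords == 0 || runLoopTimeNanos == 0) {
            return currentMax; // nothing to adapt on yet
        }
        long avgRecordSizeBytes = Math.max(1, batchSizeBytes / numRecords);
        // How many reads per second the loop actually achieves.
        double loopFrequencyHz = 1_000_000_000.0d / runLoopTimeNanos;
        // Spread the per-shard byte budget across the reads we can actually make.
        double bytesPerRead = KINESIS_SHARD_BYTES_PER_SECOND_LIMIT / loopFrequencyHz;
        int maxRecords = (int) (bytesPerRead / avgRecordSizeBytes);
        // Clamp to the valid GetRecords range.
        return Math.max(1, Math.min(maxRecords, GETRECORDS_MAX));
    }
    ```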
    
    
    ## Brief change log
    
      - `processingStartTimeNanos` records the start time of the run loop.
      - `processingEndTimeNanos` records the end time of the run loop.
      - `adjustRunLoopFrequency()` adjusts the end time to account for `sleepTimeMillis` (if any).
      - `runLoopTimeNanos` records the actual run loop time.
      - `adaptRecordsToRead()` calculates `maxNumberOfRecordsPerFetch` based on `runLoopTimeNanos` (see the sketch after this list).
      - The unused method `getAdaptiveMaxRecordsPerFetch()` is removed.
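    Put together, the refactored run loop could look roughly like the following skeleton. `getRecords()`, `deserializeAndEmitRecords()`, and `totalBatchSizeBytes()` are hypothetical stand-ins for the connector's helpers; the field and method names mirror the change log above.
    ```java
    // Illustrative skeleton of the refactored run loop; not the exact PR code.
    void runLoop() throws InterruptedException {
        while (isRunning()) {
            long processingStartTimeNanos = System.nanoTime();

            GetRecordsResult result = getRecords(shardIterator, maxNumberOfRecordsPerFetch);
            deserializeAndEmitRecords(result);

            // If the fetch finished early, sleep out the rest of fetchIntervalMillis
            // and fold that sleep into the measured loop time.
            long processingEndTimeNanos =
                    adjustRunLoopFrequency(processingStartTimeNanos, System.nanoTime());
            long runLoopTimeNanos = processingEndTimeNanos - processingStartTimeNanos;

            maxNumberOfRecordsPerFetch = adaptRecordsToRead(
                    runLoopTimeNanos,
                    result.getRecords().size(),
                    totalBatchSizeBytes(result),
                    maxNumberOfRecordsPerFetch);
        }
    }

    // Sleep out any time left in the fetch interval; return the adjusted end time.
    long adjustRunLoopFrequency(long startNanos, long endNanos) throws InterruptedException {
        if (fetchIntervalMillis != 0) {
            long sleepTimeMillis = fetchIntervalMillis - (endNanos - startNanos) / 1_000_000;
            if (sleepTimeMillis > 0) {
                Thread.sleep(sleepTimeMillis);
                return System.nanoTime();
            }
        }
        return endNanos;
    }
    ```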
    
    ## Verifying this change
    
    This change is already covered by existing tests, such as `ShardConsumerTest`.
    It has also been tested against a stream with the following configuration:
    ```
    Number of Shards: 512
    Parallelism: 128
    ```
    
    ## Does this pull request potentially affect one of the following parts:
    
      - Dependencies (does it add or upgrade a dependency): (yes / **no**)
      - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
      - The serializers: (yes / **no** / don't know)
      - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
      - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
      - The S3 file system connector: (yes / **no** / don't know)
    
    ## Documentation
    
      - Does this pull request introduce a new feature? (yes / **no**)
      - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/lyft/flink FLINK-9897.AdaptiveReadsRunLoop

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/6408.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #6408
    
----
commit 786556b9a9a509051a14772fbbd282db73e65252
Author: Lakshmi Gururaja Rao <glaksh100@...>
Date:   2018-07-24T18:44:08Z

    [FLINK-9897] Make adaptive reads depend on run loop time instead of fetch interval millis

----


> Further enhance adaptiveReads in Kinesis Connector to read more records in 
> the case of long running loops
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-9897
>                 URL: https://issues.apache.org/jira/browse/FLINK-9897
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kinesis Connector
>            Reporter: Lakshmi Rao
>            Priority: Major
>              Labels: pull-request-available
>
> In FLINK-9692, we introduced the ability for the ShardConsumer to adaptively
> read more records based on the current average record size, in order to make
> full use of the 2 MB/sec per-shard read limit. The feature assumes a maximum
> of 5 reads/sec (as prescribed by Kinesis limits) when calculating
> maxNumberOfRecordsPerFetch. When applications take more time to process
> records in the run loop, they are no longer able to read at a frequency of
> 5 reads/sec (even though their fetchIntervalMillis may be set to 200 ms). In
> such a scenario, maxNumberOfRecordsPerFetch should be calculated based on the
> time that the run loop actually takes, as opposed to fetchIntervalMillis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
