[ https://issues.apache.org/jira/browse/FLINK-39137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andreas Bube updated FLINK-39137:
---------------------------------
    Description: 
Kinesis {{GetRecords}} quota is easy to exhaust (5 calls/sec/shard limit). With 
multiple consumers on the same stream/shard, one eager reader can starve others.

*Proposed change:* Add a new Kinesis Streams source config option to throttle 
polling after successful (non-empty) fetches (usage sketch after the list):
 * {{source.reader.nonempty-records-fetch-interval}}
 ** Type: {{Duration}}
 ** Disabled by default (preserves current behavior)
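
For illustration only, a minimal sketch of how the option could be set once it exists. It assumes the key would be picked up from the {{Configuration}} passed to the new {{KinesisStreamsSource}} builder; the option is not in any released connector yet, and the stream ARN below is a placeholder.

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.connector.kinesis.source.KinesisStreamsSource;

// Sketch only: the interval option is the one proposed in this ticket and is
// not available in any released flink-connector-aws version.
Configuration sourceConfig = new Configuration();
// Wait at least 200 ms after a non-empty GetRecords response before the next poll.
sourceConfig.setString("source.reader.nonempty-records-fetch-interval", "200 ms");

KinesisStreamsSource<String> source =
    KinesisStreamsSource.<String>builder()
        .setStreamArn("arn:aws:kinesis:us-east-1:123456789012:stream/example-stream")
        .setSourceConfig(sourceConfig)
        .setDeserializationSchema(new SimpleStringSchema())
        .build();
{code}

Leaving the option unset preserves the current polling behavior.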

  was:
Kinesis {{GetRecords}} quota is easy to exhaust (5 calls/sec/shard limit). With 
multiple consumers on the same stream/shard, one eager reader can starve others.

*Proposed change:* Add a new Kinesis Streams source config option to throttle 
polling after successful (non-empty) fetches:
 * {{source.reader.nonempty-records-fetch-interval}}
 ** {{Duration}}
 ** Disabled by default (preserves current behavior)


> Add configurable delay between non-empty GetRecords calls in Kinesis source
> ---------------------------------------------------------------------------
>
>                 Key: FLINK-39137
>                 URL: https://issues.apache.org/jira/browse/FLINK-39137
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / AWS
>    Affects Versions: 1.20.3, 2.2.0, 2.1.1, 2.3.0
>            Reporter: Andreas Bube
>            Priority: Minor
>
> Kinesis {{GetRecords}} quota is easy to exhaust (5 calls/sec/shard limit). 
> With multiple consumers on the same stream/shard, one eager reader can starve 
> others.
> *Proposed change:* Add a new Kinesis Streams source config option to throttle 
> polling after successful (non-empty) fetches:
>  * {{source.reader.nonempty-records-fetch-interval}}
>  ** Type: {{Duration}}
>  ** Disabled by default (preserves current behavior)


