[ 
https://issues.apache.org/jira/browse/FLINK-35541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh updated FLINK-35541:
------------------------------------
    Description: 
Currently, if the record write operation in the sink consistently fails with a 
retriable error, the sink will retry indefinitely. If the cause of the error is 
never resolved, this can result in a poison pill record that blocks the sink.

 

The proposal is to add a configurable retry limit for each record: users can 
specify a maximum number of retries per record, and the sink will fail once 
that limit is reached (see the sketch after the connector list below).

 

We will implement this for all AWS connectors:
 * DDBSink
 * Firehose Sink
 * Kinesis Sink

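
Below is a minimal, illustrative sketch of the intended per-record behaviour. 
The class, method, and field names are assumptions made for this example only 
and do not reflect the actual connector implementation or any configuration 
API; the sketch just shows a failed record being re-queued with an incremented 
attempt count until the configured limit is reached, at which point the sink 
fails instead of retrying forever.

{code:java}
// Illustrative sketch only - all names below are assumptions,
// not the actual AWS connector API.
import java.util.ArrayList;
import java.util.List;

public class PerRecordRetryLimiter<T> {

    /** Pairs a record with the number of write attempts made so far. */
    public static final class TrackedRecord<R> {
        final R record;
        final int attempts;

        TrackedRecord(R record, int attempts) {
            this.record = record;
            this.attempts = attempts;
        }
    }

    private final int maxRetriesPerRecord;

    public PerRecordRetryLimiter(int maxRetriesPerRecord) {
        this.maxRetriesPerRecord = maxRetriesPerRecord;
    }

    /**
     * Called when a batch write reports retriable failures. Records still under
     * the limit are re-queued with an incremented attempt count; once a record
     * exceeds the limit, the sink fails fast instead of retrying indefinitely.
     */
    public List<TrackedRecord<T>> handleRetriableFailures(List<TrackedRecord<T>> failed) {
        List<TrackedRecord<T>> toRequeue = new ArrayList<>();
        for (TrackedRecord<T> entry : failed) {
            if (entry.attempts >= maxRetriesPerRecord) {
                throw new IllegalStateException(
                        "Record exceeded the configured retry limit of " + maxRetriesPerRecord);
            }
            toRequeue.add(new TrackedRecord<>(entry.record, entry.attempts + 1));
        }
        return toRequeue;
    }
}
{code}

How the limit is exposed to users (for example as a builder option on each 
sink) is left to the implementation.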
 

  was:Currently, if the record write operation in the sink consistently fails 
with a retriable error, the sink will retry indefinitely. If the cause of the 
error is never resolved, this can leave the operator stuck.


> Introduce retry limiting for AWS connector sinks
> ------------------------------------------------
>
>                 Key: FLINK-35541
>                 URL: https://issues.apache.org/jira/browse/FLINK-35541
>             Project: Flink
>          Issue Type: Technical Debt
>          Components: Connectors / AWS, Connectors / DynamoDB, Connectors / 
> Firehose, Connectors / Kinesis
>    Affects Versions: aws-connector-4.2.0
>            Reporter: Aleksandr Pilipenko
>            Priority: Major
>
> Currently, if the record write operation in the sink consistently fails with 
> a retriable error, the sink will retry indefinitely. If the cause of the 
> error is never resolved, this can result in a poison pill record that blocks 
> the sink.
>  
> The proposal is to add a configurable retry limit for each record: users can 
> specify a maximum number of retries per record, and the sink will fail once 
> that limit is reached.
>  
> We will implement this for all AWS connectors:
>  * DDBSink
>  * Firehose Sink
>  * Kinesis Sink
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
