[ https://issues.apache.org/jira/browse/FLINK-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143520#comment-15143520 ]
Stephan Ewen commented on FLINK-3386:
-------------------------------------

I don't get the duplicate status. This issue is about expiry while reading; the other issue is about load shedding. Load shedding may in most cases prevent this issue from happening, but if expiry of data does happen, this issue is still relevant.

> Kafka consumers should not necessarily fail on expiring data
> ------------------------------------------------------------
>
>                 Key: FLINK-3386
>                 URL: https://issues.apache.org/jira/browse/FLINK-3386
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector, Streaming Connectors
>    Affects Versions: 1.0.0
>            Reporter: Gyula Fora
>
> Currently, if the data in a Kafka topic expires while reading from it, it
> causes an unrecoverable failure, as subsequent retries will also fail on
> invalid offsets.
> While this might be the desired behaviour under some circumstances, it would
> probably be better in most cases to automatically jump to the earliest valid
> offset in these cases.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
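For illustration only, here is a minimal sketch of the fallback policy the issue asks for: when the requested start offset has expired (fallen below the log's earliest retained offset), resume from the earliest valid offset rather than retrying the invalid offset forever. This is not Flink connector code; the class and method names are hypothetical, chosen just to make the policy concrete.

```java
// Hypothetical sketch, not Flink's actual Kafka connector code.
// Demonstrates the fallback discussed in FLINK-3386: instead of
// failing unrecoverably on an expired offset, clamp the start
// offset into the log's currently valid range.
public class OffsetFallback {

    /**
     * Resolves the offset to resume consumption from.
     * All parameter names are illustrative assumptions.
     *
     * @param requestedOffset     offset the consumer wants to resume at
     * @param earliestValidOffset first offset still retained in the log
     * @param latestOffset        current end offset of the log
     */
    static long resolveStartOffset(long requestedOffset,
                                   long earliestValidOffset,
                                   long latestOffset) {
        if (requestedOffset < earliestValidOffset) {
            // Data expired while reading or restarting: jump forward
            // to the earliest valid offset instead of failing.
            return earliestValidOffset;
        }
        if (requestedOffset > latestOffset) {
            // An offset past the log end is equally invalid; clamp back.
            return latestOffset;
        }
        return requestedOffset;
    }

    public static void main(String[] args) {
        // Offsets 0..99 expired; the log currently spans [100, 500].
        System.out.println(resolveStartOffset(42, 100, 500));  // expired -> 100
        System.out.println(resolveStartOffset(250, 100, 500)); // valid   -> 250
    }
}
```

Whether to always apply this fallback or make it configurable (fail vs. jump) is exactly the policy question the issue leaves open.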