Github user tony810430 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3001#discussion_r112132994
--- Diff:
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisConsumer.java
---
@@ -194,26 +216,30 @@ public void run(SourceContext<T> sourceContext) throws Exception {
 		// all subtasks will run a fetcher, regardless of whether or not the subtask will initially have
 		// shards to subscribe to; fetchers will continuously poll for changes in the shard list, so all subtasks
 		// can potentially have new shards to subscribe to later on
-		fetcher = new KinesisDataFetcher<>(
-			streams, sourceContext, getRuntimeContext(), configProps, deserializer);
+		fetcher = createFetcher(streams, sourceContext, getRuntimeContext(), configProps, deserializer);

 		boolean isRestoringFromFailure = (sequenceNumsToRestore != null);
 		fetcher.setIsRestoringFromFailure(isRestoringFromFailure);

 		// if we are restoring from a checkpoint, we iterate over the restored
 		// state and accordingly seed the fetcher with subscribed shards states
 		if (isRestoringFromFailure) {
--- End diff --
@tzulitai
Because `KinesisDataFetcher.discoverNewShardsToSubscribe()` always discovers new shards starting from the latest discovered shard id, a problem can arise if some subtasks fail to discover new shards before rescaling, and the new subtasks after rescaling also miss those shards.
See the comment on `FlinkKinesisConsumerTest.testFetcherShouldBeCorrectlySeededWithNewDiscoveredKinesisStreamShard()` for a concrete example.
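To make the failure mode concrete, here is a minimal toy model of the gap described above. All class and method names are illustrative assumptions for this sketch, not the actual Flink Kinesis connector API; it only models "discovery starts from the latest seen shard id" and "the discovery position is re-seeded from restored state":

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the discovery gap. Names are illustrative; this is NOT
// the actual Flink Kinesis connector API.
public class ShardDiscoverySketch {

    // Discovery only returns shard ids strictly greater than the latest
    // shard id this subtask has already seen (mirrors "discover new shards
    // from the latest discovered shard id").
    static List<Integer> discoverNewShards(List<Integer> shardsInStream, int lastSeenShardId) {
        List<Integer> discovered = new ArrayList<>();
        for (int shardId : shardsInStream) {
            if (shardId > lastSeenShardId) {
                discovered.add(shardId);
            }
        }
        return discovered;
    }

    // After rescaling, a subtask seeds its discovery position from the
    // highest shard id present in the restored state.
    static int seedFromRestoredState(List<Integer> restoredShardIds) {
        return restoredShardIds.stream().max(Integer::compare).orElse(-1);
    }

    public static void main(String[] args) {
        // Before rescaling: shards 0, 1, 2, 3 exist. The checkpointed state
        // only covers shards 0, 1, and 3 -- shard 2 was created, but the
        // subtask responsible for it never got to discover it.
        List<Integer> shardsInStream = List.of(0, 1, 2, 3);
        List<Integer> restoredShardIds = List.of(0, 1, 3);

        // The restored subtask starts discovering from shard id 3 ...
        int lastSeen = seedFromRestoredState(restoredShardIds);

        // ... so discovery returns nothing, and shard 2 is never subscribed.
        System.out.println("Newly discovered: " + discoverNewShards(shardsInStream, lastSeen));
        // prints "Newly discovered: []"
    }
}
```

In this simplified model the fix would be to seed discovery from what the subtask actually owns rather than from the globally latest shard id, so gaps like shard 2 are still picked up.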
---