dannycranmer commented on a change in pull request #12881:
URL: https://github.com/apache/flink/pull/12881#discussion_r468863930



##########
File path: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/publisher/RecordPublisher.java
##########
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.kinesis.internals.publisher;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.streaming.connectors.kinesis.model.StartingPosition;
+
+import java.util.function.Consumer;
+
+/**
+ * A {@code RecordPublisher} will consume records from an external stream and deliver them to the registered subscriber.
+ */
+@Internal
+public interface RecordPublisher {
+
+       /**
+        * Run the record publisher. Records will be consumed from the stream and published to the consumer.
+        * The number of batches retrieved by a single invocation will vary based on the implementation.
+        *
+        * @param startingPosition the position in the stream from which to consume
+        * @param recordConsumer the record consumer to which records are output
+        * @return a status enum indicating whether the shard has been fully consumed
+        * @throws InterruptedException if the thread is interrupted while publishing
+        */
+       RecordPublisherRunResult run(StartingPosition startingPosition, Consumer<RecordBatch> recordConsumer) throws InterruptedException;

Review comment:
       I need to double-check this; I will follow up tomorrow. Essentially it was originally like this for EFO. Because the EFO subscription passes records to the consumer in a callback, the consumer is still running in the network thread. This means a `ReadTimeout` can occur and the consumer could fail mid-batch (if you apply very large backpressure). The `FanOutRecordPublisher` would then not know where to resume consumption, so the `ShardConsumer` was passing that in. However, I ended up splitting the network and shard consumer threads with a `BlockingQueue`, so I can probably track the state in the `RecordPublisher` as you suggest.
   
   I will investigate and get back to you.
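
   The state-tracking idea above could be sketched roughly as follows. This is a simplified, hypothetical illustration, not the real Flink connector classes: `TrackingRecordPublisher`, plain `String` records, and a numeric sequence counter are stand-ins for `FanOutRecordPublisher`, `RecordBatch`, and `StartingPosition`. The point is only that, once a `BlockingQueue` decouples the network thread from the shard consumer thread, the publisher can advance its own resume position record by record, so a mid-batch failure leaves the position at the first unconsumed record without the `ShardConsumer` having to pass it back in.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

class TrackingRecordPublisher {

    enum RunResult { INCOMPLETE, COMPLETE }

    // Hand-off point: the network thread would put batches here.
    private final BlockingQueue<List<String>> queue;
    // Resume position tracked inside the publisher, not by the caller.
    private long nextSequenceNumber;

    TrackingRecordPublisher(BlockingQueue<List<String>> queue, long initialSequenceNumber) {
        this.queue = queue;
        this.nextSequenceNumber = initialSequenceNumber;
    }

    // Consumes one batch per call. The position advances only after the
    // consumer has accepted each record, so a mid-batch failure leaves
    // nextSequenceNumber pointing at the first unconsumed record.
    RunResult run(Consumer<String> recordConsumer) throws InterruptedException {
        List<String> batch = queue.take();
        if (batch.isEmpty()) {
            // An empty batch signals end-of-shard in this simplified sketch.
            return RunResult.COMPLETE;
        }
        for (String record : batch) {
            recordConsumer.accept(record);
            nextSequenceNumber++;
        }
        return RunResult.INCOMPLETE;
    }

    long getNextSequenceNumber() {
        return nextSequenceNumber;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<List<String>> queue = new ArrayBlockingQueue<>(4);
        queue.put(List.of("a", "b"));
        queue.put(List.of()); // end-of-shard marker

        TrackingRecordPublisher publisher = new TrackingRecordPublisher(queue, 0L);
        List<String> seen = new ArrayList<>();

        // Drive the publisher until the shard is fully consumed.
        while (publisher.run(seen::add) == RunResult.INCOMPLETE) {
            // next iteration resumes from the tracked position
        }
        System.out.println(seen + " next=" + publisher.getNextSequenceNumber());
    }
}
```

   The caller only loops on the returned status; all resume bookkeeping lives in the publisher, which is what would let the `ShardConsumer` stop passing the starting position back in after a failure.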




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

