Github user gliu6 commented on a diff in the pull request:
https://github.com/apache/flink/pull/6021#discussion_r196940135
--- Diff: flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java ---
@@ -326,6 +366,29 @@ private void checkAndPropagateAsyncError() throws Exception {
}
}
+ /**
+ * If the internal queue of the {@link KinesisProducer} gets too long,
+ * flush some of the records until we are below the limit again.
+ * We don't want to flush _all_ records at this point since that would
+ * break record aggregation.
+ */
+ private void enforceQueueLimit() {
--- End diff --
I wonder whether we could adjust the queue limit dynamically.
You mentioned that `queue limit = (number of shards * queue size per shard) / record size`.
Except for the record size, all of these are relatively easy to set. In my case, I don't really know the record size until the application starts. Also, what if the record size varies over time?
So how about adding a queueLimit supplier function here, to let users supply how the queue limit is calculated dynamically?
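To illustrate, here is a minimal sketch of what I have in mind; `QueueLimitSupplier` and its `constant` factory are hypothetical names for this discussion, not existing FlinkKinesisProducer API:

```java
import java.io.Serializable;

/**
 * Hypothetical interface: computes the current queue limit, e.g. from
 * record sizes observed after the application has started. Serializable
 * so it could be shipped with the sink like other user functions.
 */
@FunctionalInterface
public interface QueueLimitSupplier extends Serializable {

    /** Maximum number of outstanding records to allow right now. */
    long getQueueLimit();

    /** A fixed limit, equivalent to today's static configuration. */
    static QueueLimitSupplier constant(long limit) {
        return () -> limit;
    }
}
```

`enforceQueueLimit()` would then call `queueLimitSupplier.getQueueLimit()` on each invocation instead of reading a fixed value, so the limit could follow record-size changes over time.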
---