GitHub user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/2016#discussion_r64257599
--- Diff: flink-streaming-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java ---
@@ -160,12 +168,13 @@ public void setCustomPartitioner(KinesisPartitioner<OUT> partitioner) {
 	public void open(Configuration parameters) throws Exception {
 		super.open(parameters);
-		KinesisProducerConfiguration config = new KinesisProducerConfiguration();
-		config.setRegion(this.region);
-		config.setCredentialsProvider(new StaticCredentialsProvider(new BasicAWSCredentials(this.accessKey, this.secretKey)));
+		KinesisProducerConfiguration producerConfig = new KinesisProducerConfiguration();
+
+		producerConfig.setRegion(configProps.getProperty(KinesisConfigConstants.CONFIG_AWS_REGION));
+		producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
 		//config.setCollectionMaxCount(1);
 		//config.setAggregationMaxCount(1);
--- End diff --
Okay, nice.

One thing that would be really helpful is additional testing. One big issue is that the Kinesis connector doesn't handle "flow control" nicely: if a Flink job produces data at a higher rate than the number of shards permits, I get a lot of failures. Ideally, the producer should only accept as much data as it can handle and block otherwise.

Do you have any ideas how to achieve that?
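For illustration, a minimal sketch of one way to get that blocking behavior, using the KPL's `getOutstandingRecordsCount()` to cap how many records may be in flight. This is not part of the PR; the `BlockingKinesisSink` wrapper and the `queueLimit` threshold are made up for the example, while `KinesisProducer`, `addUserRecord`, and `getOutstandingRecordsCount` are the real KPL API already used by this connector:

```java
import java.nio.ByteBuffer;

import com.amazonaws.services.kinesis.producer.KinesisProducer;

// Hypothetical backpressure sketch (not part of this PR): refuse to hand more
// records to the KPL while too many are still in flight, so the sink blocks
// instead of failing when the shards can't keep up.
public class BlockingKinesisSink {

	private final KinesisProducer producer;  // KPL producer, as used by FlinkKinesisProducer
	private final long queueLimit;           // assumed tunable threshold, e.g. 100_000

	public BlockingKinesisSink(KinesisProducer producer, long queueLimit) {
		this.producer = producer;
		this.queueLimit = queueLimit;
	}

	/** Blocks until the KPL's internal buffer has room, then enqueues the record. */
	public void writeBlocking(String stream, String partitionKey, ByteBuffer data)
			throws InterruptedException {
		// getOutstandingRecordsCount() reports how many records the KPL has
		// buffered but not yet acknowledged; polling it is a simple (if crude)
		// way to apply flow control without touching the KPL internals.
		while (producer.getOutstandingRecordsCount() >= queueLimit) {
			Thread.sleep(10);
		}
		producer.addUserRecord(stream, partitionKey, data);
	}
}
```

Polling isn't elegant; a nicer variant would wait on the futures returned by `addUserRecord` or use a callback, but the basic idea is the same: cap the number of outstanding records instead of letting the KPL's queue grow until requests fail.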