Hi,
I’m glad that you have figured it out.
Unfortunately it’s almost impossible to cover in our documentation all of the
quirks of the connectors we are using, since that would more or less come
down to fully copying their documentation :( However, I created a small PR
that mentions this quirk.
Hi,
Thanks Piotr for your response.
I've further investigated the issue and found the root cause.
There are 2 possible ways to produce/consume records to/from Kinesis:
1. Using the Kinesis Data Streams service API directly
2. Using the KCL & KPL.
The FlinkKinesisProducer uses the AWS KPL (Kinesis Producer Library) under the
hood. The KPL by default aggregates multiple user records into a single
Kinesis record, so a consumer that reads through the Data Streams API
directly, without KCL-style de-aggregation, sees the aggregated envelope: a
PartitionKey it did not assign and Data it cannot parse.
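To make the aggregation issue concrete, here is a rough, standalone sketch (my own, not Flink or KCL code; the helper name `is_kpl_aggregated` is hypothetical) of how a consumer reading the stream through the Data Streams API directly could detect a KPL-aggregated payload by its documented framing: a 4-byte magic prefix, a protobuf body, and a trailing 16-byte MD5 digest of that body.

```python
import hashlib

# Per the KPL aggregated-record format, aggregated payloads start with this
# 4-byte magic prefix and end with an MD5 digest of the bytes in between.
KPL_MAGIC = b"\xf3\x89\x9a\xc2"


def is_kpl_aggregated(data: bytes) -> bool:
    """Heuristically detect a KPL-aggregated Kinesis record payload."""
    if len(data) < len(KPL_MAGIC) + 16 or not data.startswith(KPL_MAGIC):
        return False
    body, digest = data[len(KPL_MAGIC):-16], data[-16:]
    # The trailing 16 bytes must be the MD5 of the protobuf body.
    return hashlib.md5(body).digest() == digest


# A plain (non-aggregated) payload fails the check:
print(is_kpl_aggregated(b'{"my": "record"}'))  # False

# A synthetic payload with the correct framing passes:
body = b"fake-protobuf-bytes"
print(is_kpl_aggregated(KPL_MAGIC + body + hashlib.md5(body).digest()))  # True
```

A consumer that sees such payloads must either de-aggregate them (as the KCL does automatically) or disable aggregation on the producer side.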
Hi,
Have you tried writing the same records, with exactly the same configuration,
to Kinesis, but outside of Flink (with some standalone Java application)?
Piotrek
> On 24 May 2018, at 09:40, Rafi Aroch wrote:
>
> Hi,
>
> We're using Kinesis as our input & output
Hi,
We're using Kinesis as the input & output of a job and are experiencing a
parsing exception while reading from the output stream. All streams contain 1
shard only.
While investigating the issue I noticed a weird behaviour where records get
a PartitionKey I did not assign and the record Data is not the data I produced.