Thanks, this looks similar. I'll try that workaround.
On Thu, Feb 29, 2024 at 5:15 PM Aleksandr Pilipenko
wrote:
> Based on the stacktrace, this looks like an issue described here:
> https://issues.apache.org/jira/browse/FLINK-32964
> Is your configuration similar to the one described in the ticket?
Based on the stacktrace, this looks like an issue described here:
https://issues.apache.org/jira/browse/FLINK-32964
Is your configuration similar to the one described in the ticket? If so,
you can work around this issue by explicitly specifying the credentials
provider for the connector.
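A minimal sketch of that workaround, assuming the property keys `aws.region` and `aws.credentials.provider` from the connector's `AWSConfigConstants`; the `WEB_IDENTITY_TOKEN` value and the region are placeholders, and the right provider value depends on how your job obtains credentials:

```java
import java.util.Properties;

public class KinesisCredentialsWorkaround {

    // Build the consumer configuration with an explicitly pinned
    // credentials provider instead of the default provider chain.
    public static Properties consumerConfig() {
        Properties config = new Properties();
        // Region of the Kinesis stream (placeholder).
        config.setProperty("aws.region", "us-east-1");
        // Explicit provider; other documented values include
        // AUTO, BASIC, ENV_VAR, SYS_PROP, PROFILE, ASSUME_ROLE.
        config.setProperty("aws.credentials.provider", "WEB_IDENTITY_TOKEN");
        return config;
    }

    public static void main(String[] args) {
        Properties config = consumerConfig();
        System.out.println(config.getProperty("aws.credentials.provider"));
    }
}
```

The `Properties` object would then be passed to the `FlinkKinesisConsumer` constructor in place of whatever configuration currently lets the connector fall back to the default chain.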
Sorry, I attached the wrong file. Let me paste the error log:
java.lang.RuntimeException: Maximum retries exceeded for SubscribeToShard.
Failed 10 times.
at
org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher.runWithBackoff(FanOutRecordPublisher.java:
Hi,
Could you please provide more information on the error you are observing?
The attached file does not contain anything related to Kinesis, nor any errors.
Best,
Aleksandr
On Wed, 28 Feb 2024 at 02:28, Xiaolong Wang
wrote:
> Hi,
>
> I used the flink-connector-kinesis (4.0.2-1.18) to consume from Kine
Hi,
I used the flink-connector-kinesis (4.0.2-1.18) to consume from Kinesis.
The job starts but fails within an hour. The detailed error log
is attached.
When I changed the version of the flink-connector-kinesis to `1.15.2`,
everything worked fine.
Any idea how to fix it?