https://issues.apache.org/jira/browse/FLINK-10310?focusedCommentId=16618447&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16618447
On Tue, Sep 11, 2018 at 10:15 AM Jayant Ameta wrote:
> Hi Till,
> I've opened a JIRA issue:
>
Have you configured checkpointing in your job? If enabled, the job should
revert to the last stored checkpoint in case of a failure and process
the failed record again.
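For reference, enabling checkpointing is a one-liner on the execution environment. A minimal sketch (the interval and job name are arbitrary choices, not from this thread):

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds. On failure the job restarts
        // from the latest completed checkpoint and replays the records
        // processed since then, including the one that failed.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // ... build the rest of the pipeline, including the Cassandra sink ...
        env.execute("checkpointed-cassandra-job");
    }
}
```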
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Till,
I've opened a JIRA issue: https://issues.apache.org/jira/browse/FLINK-10310.
Can we discuss it?
Jayant Ameta
On Thu, Aug 30, 2018 at 4:35 PM Till Rohrmann wrote:
> Hi Jayant,
>
> afaik it is currently not possible to control how failures are handled in
> the Cassandra Sink. What
I have never used the Flink Cassandra Sink so this may or may not work, but
have you tried creating your own custom retry policy?
https://docs.datastax.com/en/developer/java-driver/3.4/manual/retries/
Returning an ignore decision from onWriteTimeout might let the sink skip
the failed write instead of failing the job.
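Again, untested against the Flink sink, but a custom policy for driver 3.4 might look like this (the class name and retry count are my own choices):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.DriverException;
import com.datastax.driver.core.policies.RetryPolicy;

// Retries write timeouts a few times, then ignores them rather than
// failing the job. Note the trade-off: an ignored write is silently lost.
public class LenientWriteTimeoutPolicy implements RetryPolicy {

    private static final int MAX_RETRIES = 3;

    @Override
    public RetryDecision onWriteTimeout(Statement statement, ConsistencyLevel cl,
                                        WriteType writeType, int requiredAcks,
                                        int receivedAcks, int nbRetry) {
        return nbRetry < MAX_RETRIES
                ? RetryDecision.retry(cl)   // retry at the same consistency level
                : RetryDecision.ignore();   // give up silently after MAX_RETRIES
    }

    @Override
    public RetryDecision onReadTimeout(Statement statement, ConsistencyLevel cl,
                                       int requiredResponses, int receivedResponses,
                                       boolean dataRetrieved, int nbRetry) {
        return RetryDecision.rethrow();     // leave read behaviour unchanged
    }

    @Override
    public RetryDecision onUnavailable(Statement statement, ConsistencyLevel cl,
                                       int requiredReplica, int aliveReplica,
                                       int nbRetry) {
        return RetryDecision.rethrow();
    }

    @Override
    public RetryDecision onRequestError(Statement statement, ConsistencyLevel cl,
                                        DriverException e, int nbRetry) {
        return RetryDecision.rethrow();
    }

    @Override
    public void init(Cluster cluster) {}

    @Override
    public void close() {}
}
```

You would register it in the ClusterBuilder you pass to the sink, via builder.withRetryPolicy(new LenientWriteTimeoutPolicy()).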
Hi Jayant,
afaik it is currently not possible to control how failures are handled in
the Cassandra Sink. What would be the desired behaviour? The best thing is
to open a JIRA issue to discuss potential improvements.
Cheers,
Till
On Thu, Aug 30, 2018 at 12:15 PM Jayant Ameta wrote:
> Hi,
>
Hi,
During high volumes, the Cassandra sink fails with the following error:
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra
timeout during write query at consistency SERIAL (2 replica were required
but only 1 acknowledged the write)
Is there a way to configure the sink to