[
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17686787#comment-17686787
]
Leonid Ilyevsky commented on FLINK-30998:
-----------------------------------------
I have now found that the required functionality used to exist in the
org.apache.flink.streaming.connectors.opensearch package, but that package is
deprecated and not really usable anymore, because its builder is missing other
important setter methods.
So this issue is about moving the ActionRequestFailureHandler interface to the
current org.apache.flink.connector.opensearch.sink package and implementing the
related logic there.
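To make the request concrete, below is a rough sketch of what the moved interface
could look like in the new package. Nothing in this shape exists in
org.apache.flink.connector.opensearch.sink today; the FailureHandler name is
hypothetical and the method signature simply mirrors the deprecated
ActionRequestFailureHandler from the old streaming connector.
{code:java}
// Hypothetical sketch only: this interface does not exist in the new sink package yet;
// it mirrors the deprecated
// org.apache.flink.streaming.connectors.opensearch.ActionRequestFailureHandler.
package org.apache.flink.connector.opensearch.sink;

import org.opensearch.action.ActionRequest;

import java.io.Serializable;

public interface FailureHandler extends Serializable {

    /**
     * Called for each failed bulk item instead of unconditionally throwing
     * FlinkRuntimeException from OpensearchWriter.
     *
     * @param action the request that failed
     * @param failure the failure reported by OpenSearch
     * @param restStatusCode HTTP status of the failed item (e.g. 409 for a duplicate document ID)
     * @param indexer can be used to re-queue the request for a retry
     * @throws Throwable rethrow (or throw another exception) to fail the job, as happens today
     */
    void onFailure(ActionRequest action, Throwable failure, int restStatusCode, RequestIndexer indexer)
            throws Throwable;
}
{code}
On the builder side, OpensearchSinkBuilder would then need a corresponding (equally
hypothetical) setter, e.g. setFailureHandler(FailureHandler handler), with the default
handler keeping the current behaviour of failing the job.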
> Add optional exception handler to flink-connector-opensearch
> ------------------------------------------------------------
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Opensearch
> Affects Versions: 1.16.1
> Reporter: Leonid Ilyevsky
> Priority: Major
>
> Currently, when a failure comes back from OpenSearch, a FlinkRuntimeException
> is thrown from the OpensearchWriter.java code (line 346). This makes the Flink
> pipeline fail, and there is no way to handle the exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it is
> done in the Elasticsearch connector. This way the client code gets a chance to
> examine the failure and handle it.
> Here is a use case where this would be very useful. We are using streams on
> the OpenSearch side, and we are setting our own document IDs. Sometimes these
> IDs are duplicated; we need to ignore this situation and continue (this is how
> it works for us with Elasticsearch).
> However, with the OpenSearch connector, the error comes back saying that the
> batch failed (even though most of the documents were indexed and only the ones
> with duplicated IDs were rejected), and the whole Flink job fails.
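For the duplicate-ID use case quoted above, such a handler would let client code
swallow the HTTP 409 (version conflict) rejections and rethrow everything else.
A minimal sketch, assuming the hypothetical FailureHandler interface from the
earlier comment:
{code:java}
import org.apache.flink.connector.opensearch.sink.RequestIndexer;

import org.opensearch.action.ActionRequest;

// Hypothetical sketch built on the FailureHandler interface sketched above.
public class IgnoreDuplicateIdsHandler implements FailureHandler {

    @Override
    public void onFailure(
            ActionRequest action, Throwable failure, int restStatusCode, RequestIndexer indexer)
            throws Throwable {
        if (restStatusCode == 409) {
            // 409 = version conflict, i.e. a document with the same ID already exists.
            // Ignore it and keep the pipeline running, matching the behaviour we rely on
            // with the Elasticsearch connector.
            return;
        }
        // Any other failure still fails the job, as the connector does today.
        throw failure;
    }
}
{code}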