[https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687502#comment-17687502]
Leonid Ilyevsky commented on FLINK-30998:
-----------------------------------------
[~reta] Here is my pull request:
[https://github.com/apache/flink-connector-opensearch/pull/8].
Please let me know what the next step is. If you take my changes into the
branch, I assume you will run a build; I would like to get that artifact and
re-test it in my environment, just to make sure nothing is broken.
I also noticed that the branch still has the Flink version at 1.16.0, while in
main it is 1.16.1, so presumably you are going to correct that.
One more question: are you going to maintain two variants of this connector,
one for Opensearch 1.3.0 and another for 2.5.0? The differences between the
branches look very minor.
> Add optional exception handler to flink-connector-opensearch
> ------------------------------------------------------------
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Opensearch
> Affects Versions: 1.16.1
> Reporter: Leonid Ilyevsky
> Priority: Major
>
> Currently, when a failure comes back from Opensearch, a
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line
> 346), which makes the Flink pipeline fail. There is no way to handle the
> exception in client code.
> I suggest adding an option to set a failure handler, similar to the way it
> is done in the Elasticsearch connector. That way, the client code has a
> chance to examine the failure and handle it (see the sketch below).
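>
> A minimal sketch of what such a hook could look like, modeled on the legacy
> Elasticsearch connector's ActionRequestFailureHandler; the FailureHandler
> interface below is an assumption, not existing connector API:
> {code:java}
> import java.io.Serializable;
>
> // Hypothetical callback, invoked once per bulk item that Opensearch
> // rejected. An implementation can swallow the failure to keep the
> // pipeline running, or rethrow to fail the job (today's behavior).
> @FunctionalInterface
> public interface FailureHandler extends Serializable {
>     void onFailure(Throwable failure);
> }
> {code}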
> Here is a use case where this would be very useful. We are using streams on
> the Opensearch side, and we are setting our own document IDs. Sometimes
> these IDs are duplicated; we need to ignore this situation and continue
> (this is how it works for us with Elasticsearch).
> With the Opensearch connector, however, the error comes back saying that
> the batch failed (even though most of the documents were indexed and only
> the ones with duplicated IDs were rejected), and the whole Flink job fails.
> A usage sketch for this case follows below.
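>
> A minimal usage sketch, assuming the hypothetical setFailureHandler hook
> from above (OpensearchSinkBuilder, HttpHost, Requests, OpenSearchException,
> and RestStatus are real APIs; MyRecord and the host/index names are made
> up):
> {code:java}
> import org.apache.flink.connector.opensearch.sink.OpensearchSink;
> import org.apache.flink.connector.opensearch.sink.OpensearchSinkBuilder;
> import org.apache.flink.util.FlinkRuntimeException;
> import org.apache.http.HttpHost;
> import org.opensearch.OpenSearchException;
> import org.opensearch.client.Requests;
> import org.opensearch.rest.RestStatus;
>
> OpensearchSink<MyRecord> sink =
>     new OpensearchSinkBuilder<MyRecord>()
>         .setHosts(new HttpHost("localhost", 9200, "http"))
>         .setEmitter((record, context, indexer) ->
>             indexer.add(Requests.indexRequest()
>                 .index("my-index")
>                 .id(record.getId())          // client-supplied document ID
>                 .source(record.toMap())))
>         .setFailureHandler(failure -> {      // hypothetical hook
>             // Duplicate-ID rejections come back as HTTP 409 (CONFLICT):
>             // ignore them and continue; fail the job on anything else.
>             if (failure instanceof OpenSearchException
>                     && ((OpenSearchException) failure).status() == RestStatus.CONFLICT) {
>                 return;
>             }
>             throw new FlinkRuntimeException(failure);
>         })
>         .build();
> {code}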