[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687956#comment-17687956
 ] 

Leonid Ilyevsky commented on FLINK-30998:
-----------------------------------------

Hi [~reta] ,

Thanks for your help.

The commit you mentioned, where you upgraded 1.16.0 to 1.16.1, was applied to 
the main branch only; the 
[https://github.com/apache/flink-connector-opensearch/blob/dependabot/maven/flink-connector-opensearch/org.opensearch-opensearch-2.5.0/pom.xml]
 still shows 1.16.0. I guess this is not a problem, since ultimately all 
updates will be merged into the main branch.

I know that the connector built from the main branch will work with a 2.x 
cluster, but only if I explicitly upgrade the opensearch version to 2.5.0 in 
my project. Again, this is not a problem at all, but it may be worth 
mentioning this fact in the README.
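Roughly, the override in the consuming project's pom.xml would look like this (assuming the org.opensearch:opensearch artifact is the one that needs pinning; the coordinates are my assumption, not something stated in this thread):

```xml
<!-- Pin the OpenSearch client to 2.5.0 so response parsing matches a 2.x cluster -->
<dependency>
  <groupId>org.opensearch</groupId>
  <artifactId>opensearch</artifactId>
  <version>2.5.0</version>
</dependency>
```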

Without pinning the opensearch version to 2.5.0, it failed to parse some 
responses from the cluster, complaining about a missing field.

Also, a minor technical difficulty with the unit tests: as I mentioned, I had 
to make some small fixes there so they would compile with opensearch 2.5.0.

> Add optional exception handler to flink-connector-opensearch
> ------------------------------------------------------------
>
>                 Key: FLINK-30998
>                 URL: https://issues.apache.org/jira/browse/FLINK-30998
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Opensearch
>    Affects Versions: 1.16.1
>            Reporter: Leonid Ilyevsky
>            Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 346). 
> This makes the Flink pipeline fail, and there is no way to handle the 
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it is 
> done in the Elasticsearch connector. This would give the client code a chance 
> to examine the failure and handle it.
> Here is a use case where this would be very useful. We are using streams on 
> the Opensearch side, and we set our own document IDs. Sometimes these IDs are 
> duplicated; we need to ignore this situation and continue (this is how it 
> works for us with Elasticsearch). However, with the Opensearch connector, an 
> error comes back saying that the batch failed (even though most of the 
> documents were indexed and only the ones with duplicated IDs were rejected), 
> and the whole Flink job fails.
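A minimal sketch of what such a pluggable failure handler could look like, loosely modeled on the Elasticsearch connector's ActionRequestFailureHandler idea. All names here (FailureHandler, processBulk, IGNORE_DUPLICATES) are hypothetical illustrations, not the actual connector API; a 409 status stands in for the duplicate-ID rejections described in the use case:

```java
// Hypothetical sketch of an optional failure handler for the Opensearch writer.
// None of these names are the real connector API; the bulk processing is simulated.
import java.util.ArrayList;
import java.util.List;

public class FailureHandlerSketch {

    /** Hypothetical callback invoked instead of throwing FlinkRuntimeException. */
    @FunctionalInterface
    interface FailureHandler {
        /** Return true to swallow the failure and continue; false fails the job. */
        boolean onFailure(String documentId, int restStatusCode, Throwable failure);
    }

    /** Ignores duplicate-ID rejections (HTTP 409) and fails on everything else. */
    static final FailureHandler IGNORE_DUPLICATES =
            (id, status, failure) -> status == 409;

    /** Simulates walking a bulk response with the handler installed. */
    static List<String> processBulk(List<String> ids, List<Integer> statuses,
                                    FailureHandler handler) {
        List<String> indexed = new ArrayList<>();
        for (int i = 0; i < ids.size(); i++) {
            int status = statuses.get(i);
            if (status >= 200 && status < 300) {
                indexed.add(ids.get(i));
            } else if (!handler.onFailure(ids.get(i), status, null)) {
                // Today's behavior: one rejected item fails the whole job.
                throw new RuntimeException("Bulk item failed: " + ids.get(i));
            }
        }
        return indexed;
    }

    public static void main(String[] args) {
        // "a" is indexed once and then rejected as a duplicate (409).
        List<String> ids = List.of("a", "b", "a");
        List<Integer> statuses = List.of(201, 201, 409);
        List<String> indexed = processBulk(ids, statuses, IGNORE_DUPLICATES);
        System.out.println(indexed); // prints [a, b]
    }
}
```

With such a hook, the duplicate-ID case above becomes a one-line handler instead of a failed job, while the default handler could keep today's fail-fast behavior.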



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
