[
https://issues.apache.org/jira/browse/FLINK-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15690475#comment-15690475
]
ASF GitHub Bot commented on FLINK-4491:
---------------------------------------
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/2790#discussion_r89341178
--- Diff:
flink-streaming-connectors/flink-connector-elasticsearch2/src/main/java/org/apache/flink/streaming/connectors/elasticsearch2/ElasticsearchSink.java
---
@@ -244,12 +260,7 @@ public void close() {
}
if (hasFailure.get()) {
- Throwable cause = failureThrowable.get();
- if (cause != null) {
- throw new RuntimeException("An error occured in ElasticsearchSink.", cause);
- } else {
- throw new RuntimeException("An error occured in ElasticsearchSink.");
- }
+ LOG.error("Some documents failed while indexing to Elasticsearch: " + failureThrowable.get());
--- End diff --
I would suggest also adding a debug-level log statement that logs the full
stack trace.
Also, the other connectors have a flag that lets the user control whether an
error should only be logged or should fail the connector. I would suggest
adding that flag here as well.
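The suggested flag could look like the following minimal sketch (the class name `FailureHandlingSketch`, the field names, and the `failOnError` constructor parameter are hypothetical, not the actual connector code): when `failOnError` is true, `close()` rethrows the recorded failure, otherwise it only logs it and lets the job continue.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of the flag discussed in the review: fail-fast vs. log-only.
public class FailureHandlingSketch {
    private final boolean failOnError;
    private final AtomicBoolean hasFailure = new AtomicBoolean(false);
    private final AtomicReference<Throwable> failureThrowable = new AtomicReference<>();

    public FailureHandlingSketch(boolean failOnError) {
        this.failOnError = failOnError;
    }

    // Called from the bulk listener when a request fails; keeps the first cause.
    public void recordFailure(Throwable t) {
        failureThrowable.compareAndSet(null, t);
        hasFailure.set(true);
    }

    // Mirrors the close() logic under discussion.
    public void close() {
        if (hasFailure.get()) {
            Throwable cause = failureThrowable.get();
            if (failOnError) {
                throw new RuntimeException("An error occurred in ElasticsearchSink.", cause);
            } else {
                // Log-only mode: report the failure (stack trace at debug level) and return.
                System.err.println("Some documents failed while indexing to Elasticsearch: " + cause);
            }
        }
    }
}
```

In log-only mode the error message still surfaces in the logs, while `failOnError` preserves the old fail-fast behaviour as the safe default.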
> Handle index.number_of_shards in the ES connector
> -------------------------------------------------
>
> Key: FLINK-4491
> URL: https://issues.apache.org/jira/browse/FLINK-4491
> Project: Flink
> Issue Type: Improvement
> Components: Streaming Connectors
> Affects Versions: 1.1.0
> Reporter: Flavio Pompermaier
> Priority: Minor
> Labels: elasticsearch, streaming
>
> At the moment it is not possible to configure the number of shards if an index
> does not already exist on the Elasticsearch cluster. It would be a great
> improvement to handle index.number_of_shards (passed in the configuration
> object). E.g.:
> {code:java}
> Map<String, String> config = Maps.newHashMap();
> config.put("bulk.flush.max.actions", "1");
> config.put("cluster.name", "my-cluster-name");
> config.put("index.number_of_shards", "1");
> {code}
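One way the connector might pick up such settings is sketched below (the helper class `IndexSettingsSketch` and its method name are hypothetical, introduced only for illustration): collect every `index.*` key from the user config so they could be applied as index settings when the sink has to create a missing index.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: separates index-creation settings from the rest of the
// user-supplied sink configuration.
public class IndexSettingsSketch {
    public static Map<String, String> extractIndexSettings(Map<String, String> config) {
        Map<String, String> settings = new HashMap<>();
        for (Map.Entry<String, String> entry : config.entrySet()) {
            // Keys like "index.number_of_shards" are index settings; keys like
            // "cluster.name" or "bulk.flush.max.actions" configure the client/sink.
            if (entry.getKey().startsWith("index.")) {
                settings.put(entry.getKey(), entry.getValue());
            }
        }
        return settings;
    }
}
```

With the config map from the example above, this would yield only the `index.number_of_shards=1` entry, which could then be passed to the index-creation request.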
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)