Github user aljoscha commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1040#discussion_r37698000
  
    --- Diff: docs/apis/streaming_guide.md ---
    @@ -1661,6 +1674,165 @@ More about Kafka can be found [here](https://kafka.apache.org/documentation.html)
     
     [Back to top](#top)
     
    +### Elasticsearch
    +
    +This connector provides a Sink that can write to an
    +[Elasticsearch](https://elastic.co/) Index. To use this connector, add the
    +following dependency to your project:
    +
    +{% highlight xml %}
    +<dependency>
    +  <groupId>org.apache.flink</groupId>
    +  <artifactId>flink-connector-elasticsearch</artifactId>
    +  <version>{{ site.version }}</version>
    +</dependency>
    +{% endhighlight %}
    +
    +Note that the streaming connectors are currently not part of the binary
    +distribution. See
    +[here](cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
    +for information about how to package the program with the libraries for
    +cluster execution.
    +
    +#### Installing Elasticsearch
    +
    +Instructions for setting up an Elasticsearch cluster can be found
    +[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
    +Make sure to set and remember a cluster name. This name must be specified
    +when creating a Sink that writes to your cluster.
    +
    +#### Elasticsearch Sink
    +
    +The connector provides a Sink that can send data to an Elasticsearch Index.
    +
    +The sink can use two different methods for communicating with Elasticsearch:
    +
    +1. An embedded Node
    +2. The TransportClient
    +
    +See [here](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html)
    +for information about the differences between the two modes.
    +
    +This code shows how to create a sink that uses an embedded Node for
    +communication:
    +
    +<div class="codetabs" markdown="1">
    +<div data-lang="java" markdown="1">
    +{% highlight java %}
    +DataStream<String> input = ...;
    +
    +Map<String, String> config = Maps.newHashMap();
    +// This instructs the sink to emit after every element, otherwise they would be buffered
    +config.put("bulk.flush.max.actions", "1");
    +config.put("cluster.name", "my-cluster-name");
    +
    +input.addSink(new ElasticsearchSink<>(config, new IndexRequestBuilder<String>() {
    +    @Override
    +    public IndexRequest createIndexRequest(String element, RuntimeContext ctx) {
    --- End diff --
    
    The `ElasticsearchSink` is rich already. The `IndexRequestBuilder` is more like a souped-up key selector that gives the user the power to specify in great detail how they want their element added to Elasticsearch. I admit the function signature is a bit strange, but I didn't want to go full-blown RichFunction for the `IndexRequestBuilder`.
    
    Should we change it? If we did, users might also assume they could make it stateful, along with all the other things that come with rich functions.

