Hi Teena,

A potential fix for the issue has been merged:
https://issues.apache.org/jira/browse/FLINK-8489
It would be great if you could check if that fixes the problem and report
back.

Thank you,
Fabian

2018-01-23 20:04 GMT+01:00 Stephan Ewen <se...@apache.org>:

> As mentioned in the issue, please check whether using two different config
> map objects solves the problem.
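>
> For illustration, a minimal sketch of that suggestion (assuming the two
> streams, the ES host list, and the two ElasticsearchSinkFunction instances
> are already defined; the config values are placeholders):
>
>     Map<String, String> config1 = new HashMap<>();
>     config1.put("cluster.name", "my-cluster");
>     config1.put("bulk.flush.max.actions", "1");
>
>     Map<String, String> config2 = new HashMap<>();   // second, independent map
>     config2.put("cluster.name", "my-cluster");
>     config2.put("bulk.flush.max.actions", "1");
>
>     stream1.addSink(new ElasticsearchSink<>(config1, esHosts, sinkFunction1));
>     stream2.addSink(new ElasticsearchSink<>(config2, esHosts, sinkFunction2));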
>
> On Tue, Jan 23, 2018 at 1:32 PM, Teena Kappen // BPRISE <
> teena.kap...@bprise.com> wrote:
>
>> Thanks Fabian. I will go through it and add info if required.
>>
>>
>>
>> *From:* Fabian Hueske [mailto:fhue...@gmail.com]
>> *Sent:* 23 January 2018 15:20
>> *To:* Teena Kappen // BPRISE <teena.kap...@bprise.com>
>> *Cc:* Timo Walther <twal...@apache.org>; user@flink.apache.org
>>
>> *Subject:* Re: Multiple Elasticsearch sinks not working in Flink
>>
>>
>>
>> Hi Teena,
>>
>> I created FLINK-8489 [1] to track the issue.
>>
>> Please have a look and add information that might be relevant.
>>
>>
>>
>> Best, Fabian
>>
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-8489
>>
>>
>>
>> 2018-01-18 14:16 GMT+01:00 Teena Kappen // BPRISE <
>> teena.kap...@bprise.com>:
>>
>> Hi Timo,
>>
>>
>>
>> It works fine when the second sink is a Cassandra sink. The data gets read
>> from KafkaTopic2 and written to Cassandra as expected.
>>
>>
>>
>> Regards,
>>
>> Teena
>>
>>
>>
>> *From:* Timo Walther [mailto:twal...@apache.org]
>> *Sent:* 18 January 2018 18:41
>> *To:* user@flink.apache.org
>> *Subject:* Re: Multiple Elasticsearch sinks not working in Flink
>>
>>
>>
>> Hi Teena,
>>
>> what happens if you replace the second sink with a non-Elasticsearch sink?
>> Do you see the same result? Is the data read from KafkaTopic2?
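>>
>> For example, as a quick check, the second sink could temporarily be replaced
>> by a simple print sink (assuming "stream2" is the stream read from
>> KafkaTopic2):
>>
>>     // print instead of indexing into Elasticsearch, just to see whether
>>     // records from KafkaTopic2 flow through the pipeline at all
>>     stream2.print();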
>>
>> We should determine which system is the bottleneck.
>>
>> Regards,
>> Timo
>>
>>
>> On 1/18/18 at 9:53 AM, Teena Kappen // BPRISE wrote:
>>
>> Hi,
>>
>>
>>
>> I am running Flink 1.4 on a single node. My job has two Kafka consumers
>> reading from separate topics. After fetching the data, the job writes it to
>> two separate Elasticsearch sinks, so the process looks like this:
>>
>>
>>
>> KafkaTopic1 -> Kafkaconsumer1 -> create output record ->
>> Elasticsearchsink1
>>
>> KafkaTopic2 -> Kafkaconsumer2 -> create output record ->
>> Elasticsearchsink2
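>>
>> In code form, the setup looks roughly like this (a condensed sketch rather
>> than the actual job; class, host, cluster, and index names are placeholders,
>> and a single config map is shared by both sinks in this sketch):
>>
>> import java.net.InetSocketAddress;
>> import java.util.*;
>> import org.apache.flink.api.common.functions.RuntimeContext;
>> import org.apache.flink.streaming.api.datastream.DataStream;
>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>> import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
>> import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
>> import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;
>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
>> import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
>> import org.elasticsearch.client.Requests;
>>
>> public class TwoEsSinksJob {
>>
>>     public static void main(String[] args) throws Exception {
>>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>>
>>         Properties kafkaProps = new Properties();
>>         kafkaProps.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
>>         kafkaProps.setProperty("group.id", "two-sinks-job");            // placeholder
>>
>>         // two unrelated Kafka sources
>>         DataStream<String> stream1 = env.addSource(
>>                 new FlinkKafkaConsumer011<>("KafkaTopic1", new SimpleStringSchema(), kafkaProps));
>>         DataStream<String> stream2 = env.addSource(
>>                 new FlinkKafkaConsumer011<>("KafkaTopic2", new SimpleStringSchema(), kafkaProps));
>>
>>         List<InetSocketAddress> esHosts = Collections.singletonList(
>>                 new InetSocketAddress("localhost", 9300));              // placeholder
>>
>>         // user config for the ES sinks (shared by both sinks in this sketch)
>>         Map<String, String> esConfig = new HashMap<>();
>>         esConfig.put("cluster.name", "my-cluster");                     // placeholder
>>         esConfig.put("bulk.flush.max.actions", "1");
>>
>>         // each stream goes to its own Elasticsearch sink
>>         stream1.addSink(new ElasticsearchSink<>(esConfig, esHosts, indexInto("index1")));
>>         stream2.addSink(new ElasticsearchSink<>(esConfig, esHosts, indexInto("index2")));
>>
>>         env.execute("two-kafka-to-es-pipelines");
>>     }
>>
>>     // builds one IndexRequest per element and hands it to the indexer
>>     private static ElasticsearchSinkFunction<String> indexInto(final String index) {
>>         return new ElasticsearchSinkFunction<String>() {
>>             @Override
>>             public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
>>                 Map<String, String> json = new HashMap<>();
>>                 json.put("data", element);
>>                 indexer.add(Requests.indexRequest().index(index).type("record").source(json));
>>             }
>>         };
>>     }
>> }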
>>
>>
>>
>> Both streams and their processing are completely unrelated. The first sink
>> works as expected and writes output for every input record. The second sink
>> writes to Elasticsearch only once and then stops writing, even when more
>> data is fed into Kafka. Sometimes it does not write even once. We tested
>> this in two other jobs and saw the same issue in all of them.
>>
>>
>>
>> I have attached sample code that I created to illustrate the issue. We are
>> using Elasticsearch 5.6.4, so the dependency used is
>> ‘flink-connector-elasticsearch5_2.11’.
>>
>>
>>
>> Regards,
>>
>> Teena
>>
>
>
