[ https://issues.apache.org/jira/browse/FLINK-13689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906191#comment-16906191 ]

Rishindra Kumar commented on FLINK-13689:
-----------------------------------------

Hi [~fhueske],

I ran into this bug while running a streaming application that pulls data from 
Kafka and writes it to Elasticsearch using the *Elasticsearch6 connector*. When 
the application cannot connect to Elasticsearch, it keeps retrying and leaves 
every unestablished client open, which eventually results in a "Too many open 
files" error.

A close call already exists in the earlier Elasticsearch connectors (2.x, 5.x), 
which use the TransportClient. The ES6 connector, however, uses the 
RestHighLevelClient, and the corresponding close call is missing there.
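
For reference, a rough sketch of what the 2.x/5.x connectors do on this failure 
path, assuming the TransportClient API (connectedNodes() returns the reachable 
nodes) and Flink's org.apache.flink.util.IOUtils; class and method names here 
are illustrative, not the exact Flink code:

{code:java}
import java.util.List;

import org.apache.flink.util.IOUtils;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.node.DiscoveryNode;

public class TransportClientCheckSketch {

    static TransportClient verifyConnected(TransportClient transportClient) {
        List<DiscoveryNode> nodes = transportClient.connectedNodes();
        if (nodes.isEmpty()) {
            // The TransportClient is closed before the failure is surfaced,
            // so nothing leaks across reconnect attempts.
            IOUtils.closeQuietly(transportClient);
            throw new RuntimeException(
                "Elasticsearch client is not connected to any Elasticsearch nodes!");
        }
        return transportClient;
    }
}
{code}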

 

*Plan to fix:*
 # Add an IOUtils.closeQuietly(rhlClient) call before throwing the runtime 
exception in the above-mentioned code (see the sketch below this list).
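
A minimal sketch of that change, assuming Flink's org.apache.flink.util.IOUtils 
and the ES 6.x RestHighLevelClient API; the host and port are placeholders, and 
the method is simplified from the connector's createClient check:

{code:java}
import java.io.IOException;

import org.apache.flink.util.IOUtils;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class CreateClientSketch {

    static RestHighLevelClient createClient() throws IOException {
        RestHighLevelClient rhlClient = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200)));

        if (!rhlClient.ping()) {
            // New: release the client's threads and sockets before failing,
            // otherwise every reconnect attempt leaks a fully built HTTP client.
            IOUtils.closeQuietly(rhlClient);
            throw new RuntimeException("There are no reachable Elasticsearch nodes!");
        }
        return rhlClient;
    }
}
{code}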

 

*Plan to test:*

1. Start the streaming application with the Elasticsearch service stopped. 
Verify that each failed client is closed and that the "Too many open files" 
error is no longer observed (a helper for watching the descriptor count is 
sketched below).
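
To make that check concrete, a hypothetical helper that watches the open file 
descriptor count of the JVM; it assumes a Linux host, where /proc/self/fd lists 
the process's open descriptors. With the fix in place the count should stay 
flat across reconnect attempts:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FdCount {

    // Counts the file descriptors currently open in this JVM.
    static long openFileDescriptors() throws IOException {
        try (Stream<Path> fds = Files.list(Paths.get("/proc/self/fd"))) {
            return fds.count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("open fds: " + openFileDescriptors());
    }
}
{code}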

 

> Rest High Level Client for Elasticsearch6.x connector leaks threads if no 
> connection could be established
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-13689
>                 URL: https://issues.apache.org/jira/browse/FLINK-13689
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / ElasticSearch
>    Affects Versions: 1.8.1
>            Reporter: Rishindra Kumar
>            Priority: Major
>             Fix For: 1.8.2
>
>
> If the created Elasticsearch RestHighLevelClient (rhlClient) cannot reach any 
> node, the current code throws a RuntimeException but does not close the 
> client, which causes a thread leak.
>  
> *Current Code*
> {code:java}
> if (!rhlClient.ping()) {
>     throw new RuntimeException("There are no reachable Elasticsearch nodes!");
> }
> {code}
>  
> *Change Needed*
> rhlClient needs to be closed before the RuntimeException is thrown.
>  
> *Steps to Reproduce*
> 1. Add the Elasticsearch sink to the stream and start the Flink program 
> without starting Elasticsearch.
> 2. The program fails with "*Too many open files*" and does not write even if 
> Elasticsearch is started later.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
