Other members can correct me if I'm wrong, but I've noticed that when you lose 
the connection to the server, the TransportClient queues retries of whatever 
operations you try to execute and starts queueing listeners onto a 'generic' 
threadpool (which I've read somewhere is unbounded). We've seen this problem 
when we thrash ES until it eventually stops responding: our bulk requests 
start to back up and eventually the application halts with an OOM.
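
Purely as an illustration of what I mean by "back up" (this is not from the 
original post; the class name, the MAX_IN_FLIGHT constant and the indexBulk() 
helper are made up, and it assumes the 1.x TransportClient API), here is a 
rough sketch of bounding in-flight bulk requests with a semaphore so they fail 
fast instead of piling up while the cluster is unresponsive:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

public class BoundedBulkIndexer {
    private static final int MAX_IN_FLIGHT = 4;   // assumption: tune for your load
    private final Semaphore inFlight = new Semaphore(MAX_IN_FLIGHT);
    private final Client client;

    public BoundedBulkIndexer(Client client) {
        this.client = client;
    }

    public void indexBulk(BulkRequest request) throws InterruptedException {
        // Fail fast instead of letting unbounded work/listeners queue up.
        if (!inFlight.tryAcquire(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("too many outstanding bulk requests; is ES down?");
        }
        client.bulk(request, new ActionListener<BulkResponse>() {
            @Override
            public void onResponse(BulkResponse response) {
                inFlight.release();
            }

            @Override
            public void onFailure(Throwable t) {
                inFlight.release();
            }
        });
    }
}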

I don't know exactly what your application is doing when your ES node(s) go 
down, but perhaps you can proactively stop sending requests to the ES servers 
once your application sees a NoNodeAvailableException (which you should get 
when ES goes down). You could also close the TransportClient, shutting down 
its threadpool, and reconnect/re-instantiate it after a timed delay to clean 
up whatever is floating around inside the client. We have been able to solve 
most of our native-thread issues by protecting our use of the TransportClient 
and doing a soft restart of the client. 
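
For what it's worth, a minimal sketch of that kind of soft restart might look 
like the following (Java, against the 1.x TransportClient; the holder class, 
cluster name, host and 30-second delay are all assumptions, not the original 
poster's code):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ResilientClientHolder {
    private static final long RETRY_DELAY_MS = 30000L;  // assumption: how long to back off

    private TransportClient client = newClient();
    private long downSince = -1;

    private TransportClient newClient() {
        return new TransportClient(ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster")       // assumption: your cluster name
                .build())
                .addTransportAddress(new InetSocketTransportAddress("es-host", 9300));
    }

    public synchronized TransportClient get() {
        if (downSince >= 0) {
            if (System.currentTimeMillis() - downSince < RETRY_DELAY_MS) {
                // Proactively refuse new requests while ES is marked down.
                throw new IllegalStateException("ES marked down; requests suspended");
            }
            client = newClient();                        // re-instantiate after the timed delay
            downSince = -1;
        }
        return client;
    }

    // Call this from a catch (NoNodeAvailableException e) around your ES operations.
    public synchronized void markDown() {
        if (downSince < 0) {
            client.close();                              // releases the client's threadpool and netty workers
            downSince = System.currentTimeMillis();
        }
    }
}

The idea is simply to wrap your ES calls in try/catch, call markDown() when 
you see the NoNodeAvailableException, and let get() hand you a fresh client 
once the delay has elapsed.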


On Saturday, January 10, 2015 at 9:29:56 AM UTC-8, Subhadip Bagui wrote:
>
> Hi,
>
> I'm using Elasticsearch via the TransportClient for multiple operations. The 
> issue I'm facing now is that if my ES server goes down, my client-side app 
> gets an OutOfMemoryError (the exception below). I have to restart my Tomcat 
> every time after this to bring my application back up. Can someone please 
> suggest how to prevent this? 
>
>
> Jan 9, 2015 5:38:44 PM org.apache.catalina.core.StandardWrapperValve invoke
> SEVERE: Servlet.service() for servlet [spring] in context with path [/aricloud] threw exception [Handler processing failed; nested exception is java.lang.OutOfMemoryError: unable to create new native thread] with root cause
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:640)
> at java.util.concurrent.ThreadPoolExecutor.addThread(ThreadPoolExecutor.java:681)
> at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
> at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:655)
> at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:349)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:100)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:52)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
>
>
> Thanks,
> Subhadip
>
