I found out that the rejections on ES are retried by logstash after a short 
delay.  Increasing the queue by too much costs more memory in ES, which takes 
away from merges, searches, etc.

After increasing threadpool.bulk.queue_size from 50 to 100, I see no lost 
messages due to the rejections.
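For reference, the setting change above can be sketched in elasticsearch.yml like this (ES 1.x-era setting names, matching what this thread uses; verify against your version's docs before applying):

```yaml
# elasticsearch.yml -- bulk thread pool tuning discussed in this thread
threadpool.bulk.type: fixed
threadpool.bulk.size: 32         # typically one thread per core
threadpool.bulk.queue_size: 100  # raised from the default of 50
```

A larger queue only buffers spikes; it does not increase indexing throughput, and queued requests hold memory on the node.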


From: Robert Gardam <[email protected]>
Reply-To: [email protected]
Date: Thursday, August 14, 2014 at 5:55 AM
To: [email protected]
Subject: Re: bulk thread pool rejections

Did you resolve this issue? I was seeing the exact same thing in my setup. I 
also have my bulk size set to 5k in logstash. Originally I had set the thread 
pool to unlimited, but that apparently causes some strange stability issues.


On Tuesday, April 8, 2014 5:00:32 PM UTC+2, shift wrote:
I tried lowering the logstash threads, but then I am unable to keep up with the 
incoming message rate.  It is important that I index messages in real time, but 
equally important that I am not losing messages.  :)

To keep indexing real time I need 200 logstash output threads with a flush size 
of 5000 sending bulk messages to each node in the elasticsearch cluster, but I 
am concerned that I am losing messages with these rejections.
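The setup described above might look roughly like the following logstash output block. This is a hypothetical sketch: the option names (`flush_size`, `workers`, `protocol`) are from the logstash 1.x elasticsearch output and the host name is made up; check your plugin version's documentation.

```
output {
  elasticsearch {
    host       => "es-node-1"   # hypothetical node name
    protocol   => "http"
    flush_size => 5000          # bulk size mentioned above
    workers    => 200           # output threads; this is the knob to lower first
  }
}
```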

I increased the queue size to 500; I will see if this helps.


On Wednesday, April 2, 2014 11:34:43 AM UTC-4, Drew Raines wrote:
shift wrote:

> I am seeing a high number of rejections for the bulk thread pool
> on a 32 core system.  Should I leave the thread pool size fixed
> to the # of cores and the default queue size at 50?  Are these
> rejections re-processed?
>
> From my clients sending bulk documents (logstash), do I need to
> limit the number of connections to 32?  I currently have 200
> output threads to each elasticsearch node.

The rejections are telling you that ES's bulk thread pool is busy
and it can't enqueue any more to wait for an open thread.  They
aren't retried.  The exception your client gets is the final word
for that request.

Lower your logstash threads to 16 or 32, monitor rejections, and
gradually raise.  You could also increase the queue size, but keep
in mind that's only useful to handle spikes.  You probably don't
want to keep thousands around waiting since they take resources.
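One way to follow Drew's "monitor rejections" advice is to sample the node's 
thread pool stats and watch the delta of the rejected counter. A minimal 
sketch, assuming stats shaped like the "bulk" block quoted below (the field 
names match; fetching the stats from a live cluster is left out):

```python
def bulk_rejection_delta(before, after):
    """Return how many bulk requests were rejected between two samples
    of the node's thread pool stats."""
    return after["bulk"]["rejected"] - before["bulk"]["rejected"]

# Two hypothetical samples taken some interval apart:
sample_1 = {"bulk": {"threads": 32, "queue": 50, "active": 32,
                     "rejected": 12592108, "completed": 584407554}}
sample_2 = {"bulk": {"threads": 32, "queue": 50, "active": 32,
                     "rejected": 12592400, "completed": 584500000}}

print(bulk_rejection_delta(sample_1, sample_2))  # rejections in the interval
```

If the delta stays at zero after lowering the client thread count, you can 
gradually raise it again, as suggested above.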

Drew

>
> "bulk" : {
>     "threads" : 32,
>     "queue" : 50,
>     "active" : 32,
>     "rejected" : 12592108,
>     "largest" : 32,
>     "completed" : 584407554
> }
>
> Thanks!  Any feedback is appreciated.

--
You received this message because you are subscribed to a topic in the Google 
Groups "elasticsearch" group.
To unsubscribe from this topic, visit 
https://groups.google.com/d/topic/elasticsearch/6oNFDzWZv98/unsubscribe.
To unsubscribe from this group and all its topics, send an email to 
[email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/b84ceec3-145c-4129-9691-af1ad791aa57%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

