Look at the per-thread stats (press 'H' in top) and see whether any thread on the logstash machine (or the rsyslog machine) is pinned at 100%. If so, that's your bottleneck, even if other cores are idle.
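If you don't want to keep top open, a one-shot equivalent is a sketch like the following (the process name and field list assume a Linux procps `ps`; adjust for your system):

```shell
# List the threads of rsyslogd (oldest matching PID), highest CPU first.
# A single thread stuck near 100% means that stage is serialized,
# even if the machine as a whole looks mostly idle.
pid="$(pgrep -o rsyslogd)"
ps -L -o tid,pcpu,comm -p "$pid" --sort=-pcpu | head -15
```

This is the same per-thread view that 'H' toggles inside top (or `top -H -p <pid>`).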

the fact that two instances of logstash show worse performance than one really makes it look to me like logstash is the problem.

On Tue, 30 May 2017, phrogz via rsyslog wrote:

Date: Tue, 30 May 2017 16:42:56 +0000
From: phrogz via rsyslog <[email protected]>
To: rsyslog-users <[email protected]>
Cc: phrogz <[email protected]>
Subject: Re: [rsyslog] Slow log processing

So after an update of Logstash, I still have the issue.

On Logstash the input is not queuing; the system load is between 1 and 4 (on
10-core machines).


I also tried to disable one Logstash, and here are the differences:

With two Logstash instances: average incoming events/sec (min-max): 500-847; system load: 3.18

With one Logstash instance: average incoming events/sec (min-max): 1000-1687; system load: 4.57

Logstash doesn't seem to be the bottleneck.


On rsyslog the RebindInterval is set; I also tried decreasing/increasing the
queue.DequeueBatchSize and the RebindInterval, but it doesn't solve the issue.
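For reference, the knobs being discussed live on the forwarding action. A minimal sketch in RainerScript (the target host/port and the sizes here are placeholders, not my actual values):

```
action(type="omfwd"
       target="logstash.example.com" port="5000" protocol="tcp"
       RebindInterval="10000"          # close/reopen the TCP connection every N messages
       queue.type="LinkedList"         # dedicated in-memory action queue
       queue.size="100000"
       queue.dequeueBatchSize="1024"   # messages pulled per worker per cycle
       queue.workerThreads="4")
```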

I'll try disabling TLS (I can't do it right now).
Or maybe move the "relay" flow to another dedicated server to avoid two outputs
with the same destination.

Thanks,

Ludovic


________________________________
From: rsyslog <[email protected]> on behalf of phrogz via rsyslog
<[email protected]>
Sent: Friday, 26 May 2017 11:37:27
To: phrogz via rsyslog
Cc: phrogz
Subject: Re: [rsyslog] Slow log processing

Thank you both for your answers. I'll update to Logstash 5.4.0: the persistent
input queues are now GA, so it should be able to handle bursts. Plus I'll be
able to see the input queue status via the Logstash API.
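For anyone following along, enabling the 5.4 persistent queue is a logstash.yml setting; a sketch (the size cap and path are placeholders):

```
# logstash.yml
queue.type: persisted            # disk-backed input queue, GA as of Logstash 5.4
queue.max_bytes: 4gb             # upper bound on the on-disk queue size
path.queue: /var/lib/logstash/queue
```

Queue depth then shows up in the monitoring API under GET /_node/stats/pipeline on port 9600.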

I'll keep you informed.

Ludovic

-----Original Message-----
From: David Lang [mailto:[email protected]]
Sent: Tuesday, 23 May 2017 09:26
To: phrogz via rsyslog <[email protected]>
Cc: phrogz <[email protected]>
Subject: Re: [rsyslog] Slow log processing

Well, the fundamental problem is that logstash is not keeping up, so rsyslog's
internal queues build up.

Once the queues are full, rsyslog only accepts new messages at the rate that
the queues can be drained.

Unless you set rebindinterval, you will only make one connection to logstash,
and the load balancer will never get a chance to send any traffic to the second
instance. I'm not sure how much this will matter: since logstash doesn't have
any internal queueing, the normal strategy (send a burst of traffic to one
instance, disconnect and reconnect so the load balancer can redistribute, then
send another burst) may not really work, because the logstash instances have no
way of absorbing a burst and working through the backlog.

look at your logstash instances and you will probably find that one is maxed 
out.

David Lang
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.