I'm looking to get some confirmation here ... 

We are running 1.1.2 with journaling on. 

We got ourselves into a situation where poor Elasticsearch latency caused 
our buffers and our journal to fill up. 

We addressed the ES problem (turned off throttling altogether), and Graylog 
started draining out pretty quickly. 

However, because we were looking at hours of drain time, I started playing 
with the tunables: I increased the number of threads for processing and 
output, and increased the ring buffer size, to see if that would help.

In the end, it didn't. 
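For reference, these are the knobs I was turning in the server config. The 
setting names below are from memory / the 1.x docs, and the values are just 
what I tried, so double-check them against your own graylog.conf:

```properties
# graylog.conf (Graylog 1.x) - values are what I experimented with, not recommendations
processbuffer_processors = 10      # up from the default 5
outputbuffer_processors = 6        # up from the default 3
ring_size = 131072                 # up from the default 65536; must be a power of 2
output_batch_size = 500            # larger batches per ES bulk request
message_journal_enabled = true
```

None of these moved the needle for us once the journal was the bottleneck.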

I believe it's because Graylog's JournalReader is single-threaded (I guess 
to preserve ordering?); because of this the entire system will be 
constrained by the throughput the JournalReader is able to achieve. We had 
plenty of resources on the system: plenty of iops, cpu, memory headroom - 
but couldn't go any faster. 
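If it helps frame the question, here's a back-of-the-envelope sketch of 
what I think is happening. The numbers are completely made up (this is not 
Graylog code), but it shows why adding processing threads stops helping 
once the single reader is saturated:

```python
# Sketch: total drain throughput with one journal-reader thread feeding N
# processing threads is min(reader_rate, N * worker_rate). All rates are
# invented for illustration; neither figure comes from Graylog.
reader_rate = 20_000   # msgs/sec one JournalReader thread can pull (assumed)
worker_rate = 8_000    # msgs/sec each processing thread can handle (assumed)

for workers in (1, 2, 4, 8):
    throughput = min(reader_rate, workers * worker_rate)
    print(f"{workers:>2} workers -> {throughput:>6} msgs/sec")
```

Past 4 workers (in this toy example) you're pinned at the reader's 20k/sec 
no matter how much cpu or iops headroom the box has, which matches what we 
observed.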

My current thinking is that, if I'm right about this, we need to run more 
Graylog instances in parallel so that journal reads are parallelized across 
nodes and we can catch up faster.

Am I right? Is there anything else I can do to make it faster?

dave


-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/2264f4eb-623e-400e-9580-090def842e13%40googlegroups.com.
