It fails because it cannot keep its message cache in memory. We have an ASA
that produces around 5-6k msgs/s. I have 2 dedicated VMs for Elasticsearch
and 1 for graylog2-server, with 32G memory in each. I have had to tune memory
a lot, because both Elasticsearch and graylog2-server crash when they run
out of it; that is why it is recommended to set the heap size explicitly.
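For reference, here is roughly how the heap sizes get set (a sketch only; exact paths, scripts, and values depend on your install and are illustrative, not taken from my actual setup):

```shell
# Illustrative only -- adjust values to your own machines.
# Elasticsearch startup scripts of this era read ES_HEAP_SIZE:
export ES_HEAP_SIZE=16g   # e.g. roughly half of a 32G VM

# graylog2-server takes its heap from plain JVM flags:
java -Xms8g -Xmx8g -jar graylog2-server.jar
```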
My graylog2-server has an 8G heap to manage. I also have a lot of custom .drl
rules to format and extract the Cisco ASA and ACE logs we have running
through it.
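For anyone unfamiliar with the .drl side, a minimal sketch of what such a rule can look like (the field names and values here are made up for illustration, not taken from my actual ruleset):

```drl
import org.graylog2.plugin.Message

rule "Tag Cisco ASA messages"
when
    m : Message( getField("source") == "asa-fw-01" )
then
    m.addField("device_type", "cisco_asa");
end
```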

Here are the tunings I use in graylog2:
output_batch_size = 60000
processbuffer_processors = 40
outputbuffer_processors = 60
ring_size = 8192

My tunings in Elasticsearch (they may be obsolete, but you can experiment
with them):
index.translog.flush_threshold_ops: 50000
index.refresh_interval: 15s

#index.cache.field.type: soft
index.cache.field.max_size: 10000
threadpool.bulk.queue_size: 500

This makes Elasticsearch refresh its indexes less often, so it is tuned to
ingest a large volume of messages.

Brds. Martin

On Friday, 8 August 2014 09:11:27 UTC+2, Alex Lakustov wrote:
>
> Good day, 
>
> We have some problem with Graylog 0.20.6. It fails with 10k messages from 
> network device (Cisco ASA) per second.
>
> System:
> OS: Ubuntu 12.04 x64
> CPU: 8 cores
> RAM: 30 GB (24 GB for Elasticsearch) 
>
>
> Any suggestions or idea?
>
>
> Thank you in advance.
>
