Hi all, I'm running a Graylog 1.3 server on a single Linux box (Debian 8), so 
I have just one Elasticsearch node.

I'm currently receiving about 4,000-6,000 log messages per second. I had to 
increase the JVM heap size and scale the box up to 10 CPU cores and 40 GB of 
RAM, and since then everything seems OK: I see at most 200-800 unprocessed 
messages at any given time.
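For reference, this is roughly the kind of change I made (a sketch only; the file paths and variable names assume the Debian packages for Graylog 1.x and Elasticsearch 1.x, and the heap values here are illustrative, not my exact numbers):

```shell
# /etc/default/elasticsearch -- heap for the Elasticsearch node.
# Common advice: ~50% of system RAM, staying below 32 GB so compressed
# object pointers remain enabled. Illustrative value for a 40 GB box:
ES_HEAP_SIZE=16g

# /etc/default/graylog-server -- heap for the Graylog server JVM.
# -Xms and -Xmx set equal so the heap is allocated up front:
GRAYLOG_SERVER_JAVA_OPTS="-Xms4g -Xmx4g"
```

After editing these files, both services need a restart (`service elasticsearch restart` and `service graylog-server restart` on Debian 8) for the new heap sizes to take effect.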

At what point would you recommend scaling out to more Elasticsearch nodes, 
separate MongoDB instances, or something like that?

Is there a logs-per-second threshold beyond which I should move to a 
distributed architecture?

Thanks a lot!

Roberto 

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/46b85a17-54fb-4f99-8493-fdfa5add8c77%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.