Hi--


I have been using Graylog for about a month now, and functionally, I've 
grown to depend on it and am pretty happy with it.

I have, however, run into a couple of problems, such as "Journal 
utilization is too high and uncommitted messages", OutOfMemoryErrors, and 
"ElasticSearch Cluster is Red" (where, after some searching, I finally 
figured out that there were no primary indexes available, but it wasn't 
clear how to recover from this).  All of these issues feel like I am 
pushing my deployment harder than I should.

But I'm not sure what a proper configuration is for our needs.  We are 
currently pushing somewhere under 4G of log data through the system; the 
current server is a single 8G VM deployed via the Docker all-in-one 
config, with a few tweaks, viz. Graylog is given 2500M of RAM instead of 
1500M (which stopped the OutOfMemoryErrors mentioned above).  All of the 
logs are pushed via graylog-collector from around 10 other servers (the 
vast majority of the data comes from 3 servers), if that makes a 
difference.
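For reference, the heap tweak described above looks roughly like the 
following sketch.  The image name and the GRAYLOG_SERVER_JAVA_OPTS 
environment variable are assumptions based on the Graylog Docker 
documentation; your compose file or run command may differ:

```shell
# Sketch of the heap tweak in a Docker all-in-one setup (assumptions:
# the image honors GRAYLOG_SERVER_JAVA_OPTS, and "graylog2/allinone"
# is the image in use -- check your own setup).
# Heap raised from the 1500M default to 2500M:
docker run -d \
  -e GRAYLOG_SERVER_JAVA_OPTS="-Xms2500m -Xmx2500m" \
  graylog2/allinone
```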

What would be the best way to address the ElasticSearch issues?  Based on 
our usage, should we have a larger VM?  If so, how big?  More VMs?  If so, 
how many?  Or should our usage really fit fine on this configuration, with 
the problem lying elsewhere?

Is there a document somewhere which would provide such guidance?

Any insight greatly appreciated.  Thanks.



Eric

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/3faaa010-af06-45e1-9aa1-70c1d258001b%40googlegroups.com.