Group,

I've been running a setup as a testbed for a couple of months (OVA of 
1.2.2) that blew up rather spectacularly a couple of weeks ago.  It ran out 
of disk space and ended up with unassigned shards.  This: 
https://t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode.html
 
doesn't help, because I'm running a single node and can't allocate the 
replicas back to the node that already holds the primaries.  It's all my 
own fault really: since it was a testbed, I didn't pay any attention to how 
ES worked or how to tune the settings for a single-node setup.  Since it 
appears broken beyond repair, I'm planning to build a new node to replace 
it.  I've done much more reading this time and would like a sanity check of 
what I'm planning for the new node.

CentOS 7
RPM distribution of either the new version 2 beta, or the latest stable
2 processor cores (it's a VM, so I can allocate more CPU/RAM/HD later if 
needed)
4 GB of RAM
100 GB of HD space

ES settings
Single node
4 shards per index
0 replicas
Rotation limited by: size
Maximum size per index: 2 GB
Maximum number of indices: 20
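
For reference, here's roughly how I understand those settings would map 
into Graylog's server.conf.  The key names below are my reading of the 1.x 
docs (please correct me if they've changed in 2.x), and the 2 GB size is 
expressed in bytes:

```
# Graylog server.conf -- assumed 1.x key names
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Size-based rotation: rotate the active index once it reaches 2 GB
rotation_strategy = size
elasticsearch_max_size_per_index = 2147483648

# Keep at most 20 indices, deleting the oldest when the limit is hit
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
```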

I'll be monitoring server HD space with 24/7 active alarming to prevent 
another incident of the disk filling up (or reaching the high water mark).
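
On the high-water-mark point: if I've read the docs right, ES's disk-based 
allocation thresholds can be tuned in elasticsearch.yml, so I'm planning to 
set my alarm threshold below the low watermark.  These are the settings I 
believe exist (defaults shown, but treat the exact names as my assumption):

```
# elasticsearch.yml -- disk-based shard allocation thresholds
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 85%    # stop allocating new shards to the node
cluster.routing.allocation.disk.watermark.high: 90%   # try to move shards off the node
```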

The old node processed 30 msgs/s on average on a single UDP (syslog) input.  
I expect the new node to do about the same, but it could grow to double 
that within the first several months.

Does anyone see any problems, concerns, or anything I might be badly 
misunderstanding?  Any advice on other settings to watch out for to avoid 
the problem of unassigned or orphaned shards on a single-node cluster?
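
In case it helps anyone searching later: my understanding is that on a 
single node the stuck replicas can simply be dropped via the index settings 
API, since with 0 replicas there's nothing left unassigned.  Sketched from 
the ES docs (host/port assumed to be the defaults):

```
# Drop replicas on all indices so a single-node cluster can go green
# (assumes ES is listening on localhost:9200)
curl -XPUT 'http://localhost:9200/_settings' \
     -d '{ "index": { "number_of_replicas": 0 } }'
```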

Thanks for the help

Casey

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.