4 GB isn't a lot when everything runs on one machine.
- First, cap the Elasticsearch heap at 1 GB (ES_HEAP_SIZE=1g); on
CentOS this is set in /etc/sysconfig/elasticsearch.
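That setting would look roughly like this (a sketch; 1g is the suggestion above, adjust it to what your machine can spare):

```shell
# /etc/sysconfig/elasticsearch  (read by the ES init script on CentOS)
# Cap the JVM heap at 1 GB so ES leaves room for Graylog, MongoDB,
# and the OS page cache on a 4 GB all-in-one box.
ES_HEAP_SIZE=1g
```

Restart the service afterwards (service elasticsearch restart) for the new heap size to take effect.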
- Second, lower the field data cache in elasticsearch.yml with:
indices.fielddata.cache.size: 40% (or even lower)
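In elasticsearch.yml that looks like the fragment below (40% is the value suggested above and is a percentage of the heap; pick a lower value if memory stays tight):

```yaml
# elasticsearch.yml
# Bound the field data cache; without a limit it can grow until the
# heap is exhausted. Entries are evicted once the limit is reached.
indices.fielddata.cache.size: 40%
```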
- Check whether there is any swapping going on; this stuff can't stand it.
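A quick way to see whether the box is dipping into swap (Linux only; it reads the kernel-reported numbers from /proc/meminfo):

```shell
#!/bin/sh
# Report how much swap is currently in use; for an Elasticsearch node
# this should stay at or very near zero.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
echo "swap used: $((swap_total - swap_free)) kB of ${swap_total} kB"
```

If swap is being used, either add memory, lower the heap sizes, or lock the ES heap into RAM with bootstrap.mlockall: true in elasticsearch.yml (the setting name in ES 1.x; it also requires a suitable memlock ulimit).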
- Is a lot of data stored and kept over time? ES tries to keep as much
of the field data in memory as possible. If you don't need a long
history, consider a shorter retention period and configure Graylog
accordingly.
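In Graylog 1.x the retention knobs live in the server config (typically /etc/graylog/server/server.conf); a sketch, with values you would tune to your own message volume:

```properties
# Rotate to a new Elasticsearch index after this many documents...
elasticsearch_max_docs_per_index = 20000000
# ...keep at most this many indices in total...
elasticsearch_max_number_of_indices = 10
# ...and delete the oldest index once that limit is reached.
retention_strategy = delete
```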
Install a plugin to check how your ES instance is running:
/usr/share/elasticsearch/bin/plugin -i royrusso/elasticsearch-HQ
Then check it at:
http://hostaddress:9200/_plugin/HQ/
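If you'd rather not install a plugin, the same numbers are exposed over the REST API; for example (host and port assumed to be the defaults):

```shell
# Cluster health: status, node count, shard allocation.
curl -s 'http://localhost:9200/_cluster/health?pretty'
# Per-node JVM stats: watch heap_used_percent under jvm -> mem.
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'
```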
Good luck ;-)
On Monday, April 13, 2015 at 08:13:59 UTC+2, [email protected] wrote:
>
>
> OK, no takers? I'm the only one this has happened to, or I'm the only one
> running an all-in-one-node config from one of the AWS images on a medium
> machine?
>
> Could anyone at least recommend a good way to configure the memory usage
> so it can reliably fit in 4GB memory?
>
> Thanks in advance.
>
--
You received this message because you are subscribed to the Google Groups
"graylog2" group.