It seems I was wrong about which process was causing the memory issue; it
doesn't appear to be indexing, since the issue occurred even after we stopped it.
I found out from the '_cluster/stats' and '_index/stats' APIs that one of the
existing indices is taking most of the memory:
"filter_cache" : {
  "memory_size" : "252.2mb",
  "memory_size_in_bytes" : 264546840,
  "evictions" : 0
},
"id_cache" : {
  "memory_size" : "215.4mb",
  "memory_size_in_bytes" : 225963916
},
"fielddata" : {
  "memory_size" : "3.2gb",
  "memory_size_in_bytes" : 3479467264,
  "evictions" : 0
},
"completion" : {
  "size" : "0b",
  "size_in_bytes" : 0
},
"segments" : {
  "count" : 333,
  "memory" : "5.1gb",
  "memory_in_bytes" : 5561471705
}
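For reference, a sketch of how stats like these can be pulled per index. The host, port, and index name ("myindex") are assumptions for illustration; the actual cluster address will differ:

```shell
# Cluster-wide stats (assumes Elasticsearch on localhost:9200):
curl 'http://localhost:9200/_cluster/stats?pretty'

# Per-index stats, filtered to the memory-heavy sections above
# ("myindex" is a placeholder index name):
curl 'http://localhost:9200/myindex/_stats/fielddata,filter_cache,segments?pretty'
```

Comparing the fielddata and segments numbers across indices is how to confirm which index dominates heap usage.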
To avoid confusion, I will open a separate thread to ask about it.
On Friday, February 14, 2014 11:29:55 AM UTC-5, Binh Ly wrote:
>
> Don't know if this might help, but you can limit the max size of your
> fielddata cache as well as the expiry of the items in that cache:
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html
>
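Following the suggestion above, a minimal elasticsearch.yml sketch for capping the fielddata cache; the specific values (30%, 10m) are placeholder assumptions to be tuned per cluster:

```yaml
# Cap fielddata cache at a fraction of heap; entries are
# evicted (LRU) once the limit is reached.
indices.fielddata.cache.size: 30%

# Optionally expire cached entries after a fixed time,
# regardless of the size limit.
indices.fielddata.cache.expire: 10m
```

These are node-level settings and require a restart to take effect.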