Hi,

I am trying to index documents that are roughly 10-20 MB each. I start 
seeing memory issues when I index them all concurrently, from a single 
TransportClient on one machine, into a single-node cluster with a 32 GB ES 
server. Memory appears to be a problem on both the client and the server 
side, which I more or less understand and expect :).

I have tried tuning the heap sizes and the batch sizes in the Bulk API, 
but am I pushing the limits too far? One thought is to stream the data so 
that I never hold it all in memory at once. Is that possible? Is this a 
general problem, or is my usage simply wrong?
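
In case it is relevant, here is a minimal sketch of the kind of batching I 
have in mind, using BulkProcessor from the Java client to cap both the 
document count and the total payload bytes per bulk request, with at most 
one bulk in flight. The limits, index/type names, and listener bodies are 
placeholders, not my real settings:

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class BulkSketch {
    // Builds a BulkProcessor that flushes after a few large documents or
    // ~50 MB of payload, whichever comes first (placeholder values).
    static BulkProcessor buildProcessor(Client client) {
        return BulkProcessor.builder(client, new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {
                // Called just before each bulk request is sent.
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request,
                                  BulkResponse response) {
                if (response.hasFailures()) {
                    System.err.println("bulk " + executionId + ": "
                            + response.buildFailureMessage());
                }
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request,
                                  Throwable failure) {
                System.err.println("bulk " + executionId + " failed: " + failure);
            }
        })
        .setBulkActions(5)                                    // flush after 5 docs...
        .setBulkSize(new ByteSizeValue(50, ByteSizeUnit.MB))  // ...or ~50 MB of payload
        .setConcurrentRequests(1)                             // at most one bulk in flight
        .build();
    }
}

My indexing threads would then just call bulkProcessor.add(indexRequest) 
and call bulkProcessor.close() on shutdown to flush whatever is buffered. 
I assume this would keep the client from buffering much more than one bulk 
worth of payload at a time, but I am not sure it is the right approach for 
documents this large.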

Thanks,
Sandeep
