Hi all,

I am running a load test against ES to identify system requirements and 
the optimum configuration for my load. The test has 10 data-publishing 
tasks and 100 data-consuming tasks. 
Data publisher: each publisher runs every minute and publishes a batch of 
1700 records using the Java bulk API.
Data consumer: each consumer runs every minute and executes a query with a 
randomly selected aggregation type (average, minimum, or maximum) over 
selected data from the last hour.
Example query that a consumer runs every minute: 
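Each publisher does roughly the following (a simplified sketch; the field names and values here are placeholders, not my real schema):

```java
// Simplified publisher sketch (ES 1.x Java API); assumes an existing Client.
BulkRequestBuilder bulk = client.prepareBulk();
for (int i = 0; i < 1700; i++) {
    bulk.add(client.prepareIndex("myIndex", "myRecordType")
        .setSource(jsonBuilder()
            .startObject()
            .field("field2", System.currentTimeMillis()) // placeholder fields
            .field("field3", 42.0)
            .endObject()));
}
BulkResponse bulkResponse = bulk.execute().actionGet();
if (bulkResponse.hasFailures()) {
    System.err.println(bulkResponse.buildFailureMessage());
}
```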

SearchResponse searchResponse = client.prepareSearch("myIndex")
    .setTypes("myRecordType")
    .setQuery(QueryBuilders.filteredQuery(
        QueryBuilders.matchQuery("field1", "value1"),
        FilterBuilders.rangeFilter("field2").from("value2").to("value3")))
    .addAggregation(AggregationBuilders.avg("AVG_NAME").field("field3"))
    .execute().actionGet();

I ran the above test case on my local machine without ES clustering, and 
it ran for around 4 hours without any errors; ES memory consumption stayed 
under 2GB. I then ran the same test case on a three-node ES cluster (EC2 
instances), and ES ended up with an out-of-memory error after around 5 
minutes. All three instances have the same hardware configuration:

8GB RAM
80GB SSD
4-core CPU

Instance 1
Elasticsearch server (4GB heap)
10 data publishers which will publish data to the local ES server

Instance 2
Elasticsearch server (8GB heap)
10 consumers which will query data from the local ES server

Instance 3
Elasticsearch server (4GB heap)

I'm using ES version 1.5.1 with JDK 1.8.0_40.

My ES cluster has the following custom settings (everything else is left 
at the defaults):

bootstrap.mlockall: true
indices.fielddata.cache.size: "30%"
indices.cache.filter.size: "30%"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["host1:9300","host2:9300","host3:9300"]
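To put those cache sizes in context against the 4GB heaps, this is the rough budget I am working from (simple arithmetic, nothing ES-specific):

```java
// Rough cache budget on the 4GB-heap nodes, given the settings above.
public class CacheBudget {
    public static void main(String[] args) {
        long heapMb = 4096;       // 4GB heap (instances 1 and 3)
        double fielddata = 0.30;  // indices.fielddata.cache.size: 30%
        double filter = 0.30;     // indices.cache.filter.size: 30%
        System.out.println("fielddata cap MB: " + heapMb * fielddata);
        System.out.println("filter cap MB:    " + heapMb * filter);
        System.out.println("combined cap MB:  " + heapMb * (fielddata + filter));
    }
}
```

So up to 60% of the heap can be taken by these two caches alone, leaving limited headroom for everything else.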

I believe I have missed something here regarding the ES clustering 
configuration. Please help me identify what I have missed. I want to 
reduce memory utilization as much as possible, which is why I gave ES only 
a 4GB heap. If there is a way to reduce memory consumption by lowering the 
read consistency level, that option is also OK for me. I have already 
increased the refresh interval for my index, but still no luck :(

Thanks
Manjula
