I was using 0.90.3 with 1 master node, 1 search load-balancing node, and 2 
data nodes, holding 5 indexes. The data directory on each data server is 
about 101 GB, and 24.3% of the documents are deleted.
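(For reference, the per-index deleted-document ratio can be read from the cat API in 1.x; a minimal sketch, assuming the node listens on the default HTTP port 9200:)

```shell
# List each index with its live and deleted document counts and store size
# (Elasticsearch 1.x cat API; assumes the default HTTP port 9200).
curl -s 'http://localhost:9200/_cat/indices?v&h=index,docs.count,docs.deleted,store.size'
```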

The data servers have 24 cores and 32 GB of memory and run only one 
Elasticsearch process. The JVM arguments are -Xms12g -Xmx12g -Xss256k 
-Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-XX:+HeapDumpOnOutOfMemoryError.

We get hundreds of requests per second; the average latency was about 40 ms.

Last Friday we upgraded Elasticsearch to version 1.1.0. We didn't reindex 
the documents, just switched to 1.1 and restarted all the Elasticsearch 
processes. Everything seemed fine at first, but today I found that latency 
has become much higher than before: the average is now beyond 500 ms.

My guess is that Lucene's on-disk format has changed, and since we didn't 
reindex the documents, reading the old-format segments adds some extra cost?
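(This guess can be checked: the segments API reports which Lucene version each segment was written with, and a forced merge rewrites segments in the current format. A sketch with the 1.x APIs, assuming localhost:9200; INDEX is a placeholder for one of our index names:)

```shell
# Show per-segment details, including the Lucene version each segment
# was written with (old-format segments will show the 0.90-era version).
curl -s 'http://localhost:9200/_segments?pretty'

# Expunge deleted documents from an index (1.x optimize API).
curl -s -XPOST 'http://localhost:9200/INDEX/_optimize?only_expunge_deletes=true'

# Merge down to one segment, which rewrites all data in the current
# Lucene format. Heavy on I/O, so best run off-peak.
curl -s -XPOST 'http://localhost:9200/INDEX/_optimize?max_num_segments=1'
```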

Does anyone know the reason and a solution, or has anyone hit the same 
problem? It's very urgent.
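(For anyone debugging a similar slowdown, the hot threads API shows where the nodes are spending CPU time; a sketch, again assuming the default port:)

```shell
# Sample the busiest threads across all nodes to see what the extra
# latency is being spent on (merging, searching, GC, etc.).
curl -s 'http://localhost:9200/_nodes/hot_threads?threads=3'
```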
