Your CMS old gen is completely full ([7.1gb]->[7.1gb]/[7.1gb]) and each
collection reclaims almost nothing, so the JVM is stuck in back-to-back
full GCs. Upgrade to a newer version of ES, upgrade Java, and if you can,
increase your heap.
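
If you do bump the heap, a minimal sketch of the change (the 16g value is
an assumption — roughly half of your 32G box; staying below ~32g keeps
compressed object pointers enabled in the JVM):

```shell
# Sketch: give Elasticsearch 0.90.x a larger heap before starting the node.
# ES_HEAP_SIZE is read by the standard startup script; 16g here is an
# assumed value (~50% of the 32G of RAM), not a hard recommendation.
export ES_HEAP_SIZE=16g
./bin/elasticsearch
```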

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: [email protected]
web: www.campaignmonitor.com


On 17 June 2014 21:00, Kevin Qi <[email protected]> wrote:

> Hi,
> We are running Elasticsearch 0.90.7 on a Linux server (single-node cluster).
> From time to time, Elasticsearch stops responding, and the issue looks
> related to the garbage collector. The log file is shown below:
>
> [2014-06-16 09:35:48,563][WARN ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674153][113273] duration [12.1s], collections
> [1]/[12.3s], total [12.1s]/[17.9h], memory [7.3gb]->[7.2gb]/[7.9gb],
> all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [158.2mb]->[95.3mb]/[665.6mb]}{[Par Survivor Space]
> [0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:35:58,800][INFO ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674154][113274] duration [9.9s], collections
> [1]/[10.2s], total [9.9s]/[17.9h], memory [7.2gb]->[7.2gb]/[7.9gb],
> all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [95.3mb]->[58.6mb]/[665.6mb]}{[Par Survivor Space]
> [0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:36:11,236][WARN ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674155][113275] duration [12s], collections
> [1]/[12.4s], total [12s]/[17.9h], memory [7.2gb]->[7.3gb]/[7.9gb],
> all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [58.6mb]->[138.1mb]/[665.6mb]}{[Par Survivor Space]
> [0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:36:23,879][WARN ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674156][113276] duration [12.3s], collections
> [1]/[12.6s], total [12.3s]/[17.9h], memory [7.3gb]->[7.2gb]/[7.9gb],
> all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [138.1mb]->[113mb]/[665.6mb]}{[Par Survivor Space]
> [0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:36:34,043][INFO ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674157][113277] duration [9.8s], collections
> [1]/[10.1s], total [9.8s]/[17.9h], memory [7.2gb]->[7.2gb]/[7.9gb],
> all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [113mb]->[79mb]/[665.6mb]}{[Par Survivor Space] [0b]->[0b]/[83.1mb]}{[CMS
> Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:36:46,486][WARN ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674158][113278] duration [12.1s], collections
> [1]/[12.4s], total [12.1s]/[17.9h], memory [7.2gb]->[7.2gb]/[7.9gb],
> all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [79mb]->[107.2mb]/[665.6mb]}{[Par Survivor Space] [0b]->[0b]/[83.1mb]}{[CMS
> Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:36:56,649][INFO ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674159][113279] duration [9.9s], collections
> [1]/[10.1s], total [9.9s]/[18h], memory [7.2gb]->[7.2gb]/[7.9gb], all_pools
> {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [107.2mb]->[68.7mb]/[665.6mb]}{[Par Survivor Space]
> [0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
> [2014-06-16 09:37:08,995][WARN ][monitor.jvm              ] [node01]
> [gc][ConcurrentMarkSweep][1674160][113280] duration [12s], collections
> [1]/[12.3s], total [12s]/[18h], memory [7.2gb]->[7.2gb]/[7.9gb], all_pools
> {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
> [68.7mb]->[79.7mb]/[665.6mb]}{[Par Survivor Space]
> [0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
> [34.7mb]->[34.7mb]/[82mb]}
>
> The garbage collector logs long pauses (around 10 seconds). Our system
> has 32G of total memory and we set ES_HEAP_SIZE to 8G.
> We are almost sure this issue comes from the long GC runs.
> What can we do to prevent this behavior and run ES smoothly?
>
> Thanks,
>
> Kevin
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/bac7e7fb-e166-457f-89af-e832ce76010d%40googlegroups.com
> <https://groups.google.com/d/msgid/elasticsearch/bac7e7fb-e166-457f-89af-e832ce76010d%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAEM624bDWg-uQ6wM1nFM8byCZXXbHF9V3L34wV1%2BmmS0LLNoWA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.