rnblough commented on code in PR #9967:
URL: https://github.com/apache/ozone/pull/9967#discussion_r2977888719
##########
hadoop-ozone/dist/src/shell/ozone/ozone-functions.sh:
##########
@@ -1525,8 +1525,8 @@ function ozone_add_default_gc_opts
   if [[ ! "$OZONE_OPTS" =~ "-XX" ]] ; then
     OZONE_OPTS="${OZONE_OPTS} -XX:ParallelGCThreads=8"
     if [[ "$java_major_version" -lt 15 ]]; then
-      OZONE_OPTS="${OZONE_OPTS} -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled"
-      ozone_error "No '-XX:...' jvm parameters are set. Adding safer GC settings '-XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled' to the OZONE_OPTS"
+      OZONE_OPTS="${OZONE_OPTS} -XX:+UseConcMarkSweepGC -XX:NewRatio=3 -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled"
Review Comment:
Happily there isn't one. The root cause of the problem is an ergonomic
detail unique to CMS, from which G1GC does not suffer. This isn't even a bug;
it is intended behavior, just designed back in 2004, before modern scale
challenges existed or individual servers had the resources to meet them.
If we switch to the question of "can G1GC performance be improved with this
property", the answer is no as I understand it: G1GC depends on adaptive sizing
of the young gen to meet the pause goals that are set. Any property that
fixes the young gen heap size, whether NewRatio, Xmn, or NewSize=MaxNewSize,
would stop that mechanism from working. This wouldn't break G1GC, but it would
likely mean we would start consistently exceeding its pause target above
some load threshold.
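
To make the distinction concrete, here is a hedged sketch (not the actual Ozone function, and `pick_default_gc_opts` is a hypothetical name) of the logic the patch implements: pin the young gen via NewRatio only on the CMS branch, and leave it unpinned on newer JVMs so G1GC's adaptive sizing stays in play.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of ozone_add_default_gc_opts-style logic, for
# illustration only. Assumption: CMS is selectable on Java < 15 (it was
# removed by JEP 363), and G1 is the default collector afterward.
pick_default_gc_opts() {
  local java_major_version="$1"
  local opts="-XX:ParallelGCThreads=8"
  if [[ "${java_major_version}" -lt 15 ]]; then
    # CMS branch: fix the old/young ratio so CMS's 2004-era ergonomics
    # cannot shrink the young gen pathologically.
    opts+=" -XX:+UseConcMarkSweepGC -XX:NewRatio=3"
    opts+=" -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled"
  else
    # G1 branch: deliberately do NOT add -XX:NewRatio, -Xmn, or
    # -XX:NewSize=-XX:MaxNewSize here; any of them would pin the young
    # gen and defeat G1's adaptive sizing toward its pause target.
    opts+=" -XX:+UseG1GC"
  fi
  echo "${opts}"
}

pick_default_gc_opts 11
pick_default_gc_opts 17
```

The key design point is that the same flag is "safer" on one branch and harmful on the other, which is why it belongs inside the version check rather than in the shared prefix.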
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]