[ https://issues.apache.org/jira/browse/HDFS-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796797#comment-15796797 ]
Gang Xie commented on HDFS-7784:
--------------------------------
The JVM settings:
-Xmx102400m
-Xms102400m
-Xmn5508m
-XX:MaxDirectMemorySize=3686m
-XX:MaxPermSize=1024m
-XX:+PrintGCApplicationStoppedTime
-XX:+UseConcMarkSweepGC
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:SurvivorRatio=6
-XX:+UseCMSCompactAtFullCollection
-XX:CMSInitiatingOccupancyFraction=70
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+CMSParallelRemarkEnabled
-XX:+UseNUMA
-XX:+CMSClassUnloadingEnabled
-XX:CMSMaxAbortablePrecleanTime=10000
-XX:TargetSurvivorRatio=80
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=100
-XX:GCLogFileSize=128m
-XX:CMSWaitDuration=8000
-XX:+CMSScavengeBeforeRemark
-XX:ConcGCThreads=16
-XX:ParallelGCThreads=16
-XX:+CMSConcurrentMTEnabled
-XX:+SafepointTimeout
-XX:MonitorBound=16384
-XX:-UseBiasedLocking
-XX:MaxTenuringThreshold=3
-XX:+ParallelRefProcEnabled
-XX:-OmitStackTraceInFastThrow
> load fsimage in parallel
> ------------------------
>
> Key: HDFS-7784
> URL: https://issues.apache.org/jira/browse/HDFS-7784
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Walter Su
> Assignee: Walter Su
> Priority: Minor
> Labels: BB2015-05-TBR
> Attachments: HDFS-7784.001.patch, test-20150213.pdf
>
>
> When a single NameNode holds a huge number of files, without using federation, the
> startup/restart speed is slow, and the fsimage loading step takes most of the
> time. fsimage loading can be separated into two parts: deserialization and object
> construction (mostly map insertion). Deserialization takes most of the CPU
> time, so we can do the deserialization in parallel and the hashmap insertion in serial.
> This will significantly reduce the NN start time.
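For illustration only, here is a minimal sketch of the approach described in the issue: several worker threads do the CPU-heavy deserialization in parallel, while one thread owns the map and performs the insertions serially, so no locking is needed on the map. The Record class, the deserialize() helper, and the chunk list are hypothetical placeholders, not the actual HDFS-7784 patch code or the real fsimage format.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelLoadSketch {

  // Hypothetical stand-in for a deserialized inode record.
  static final class Record {
    final long id;
    final String name;
    Record(long id, String name) { this.id = id; this.name = name; }
  }

  // Hypothetical deserializer; the real fsimage sections are protobuf-based.
  static Record deserialize(byte[] chunk) {
    return new Record(chunk.length, new String(chunk));
  }

  public static void main(String[] args) throws Exception {
    // Pretend these byte chunks were read from the fsimage file.
    List<byte[]> chunks = List.of("a".getBytes(), "bb".getBytes(), "ccc".getBytes());

    BlockingQueue<Record> queue = new ArrayBlockingQueue<>(1024);
    ExecutorService workers = Executors.newFixedThreadPool(4);

    // Parallel part: CPU-heavy deserialization on the worker pool.
    for (byte[] chunk : chunks) {
      workers.execute(() -> {
        try {
          queue.put(deserialize(chunk));
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });
    }

    // Serial part: a single thread drains the queue and fills the map.
    Map<Long, Record> inodeMap = new HashMap<>();
    for (int i = 0; i < chunks.size(); i++) {
      Record r = queue.take();
      inodeMap.put(r.id, r);
    }

    workers.shutdown();
    System.out.println("loaded " + inodeMap.size() + " records");
  }
}
{code}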