jojochuang commented on a change in pull request #1028: HDFS-14617 - Improve
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r311837663
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
##########
@@ -255,14 +345,28 @@ public int compare(FileSummary.Section s1,
FileSummary.Section s2) {
case INODE: {
currentStep = new Step(StepType.INODES);
prog.beginStep(Phase.LOADING_FSIMAGE, currentStep);
- inodeLoader.loadINodeSection(in, prog, currentStep);
+ stageSubSections = getSubSectionsOfName(
+ subSections, SectionName.INODE_SUB);
+ if (loadInParallel && (stageSubSections.size() > 0)) {
+ inodeLoader.loadINodeSectionInParallel(executorService,
+ stageSubSections, summary.getCodec(), prog, currentStep);
+ } else {
+ inodeLoader.loadINodeSection(in, prog, currentStep);
+ }
}
break;
case INODE_REFERENCE:
snapshotLoader.loadINodeReferenceSection(in);
break;
case INODE_DIR:
- inodeLoader.loadINodeDirectorySection(in);
+ stageSubSections = getSubSectionsOfName(
+ subSections, SectionName.INODE_DIR_SUB);
+ if (loadInParallel && stageSubSections.size() > 0) {
Review comment:
You probably need to put a sample fsimage in the old format (one without
sub-sections in its index) into the test resource directory
(hadoop-hdfs-project/hadoop-hdfs/src/test/resources/) and load it in a unit
test, so the serial fallback path stays covered.
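To make the fallback behaviour under discussion concrete: the diff loads inode sub-sections in parallel only when parallel loading is enabled *and* the image index actually lists sub-sections; an old-format image has none, so it must take the original serial path. Below is a minimal, self-contained sketch of that control flow. All class and method names (`Section`, `getSubSectionsOfName`, `loadInodeSection`) are illustrative stand-ins, not the real `FSImageFormatProtobuf` types.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubSectionFallbackSketch {

  // Minimal stand-in for FileSummary.Section: a name and a file offset.
  static class Section {
    final String name;
    final long offset;
    Section(String name, long offset) {
      this.name = name;
      this.offset = offset;
    }
  }

  // Mirrors getSubSectionsOfName(): keep only sub-sections matching the name.
  static List<Section> getSubSectionsOfName(List<Section> sections, String name) {
    List<Section> out = new ArrayList<>();
    for (Section s : sections) {
      if (s.name.equals(name)) {
        out.add(s);
      }
    }
    return out;
  }

  // Mirrors the diff's control flow: go parallel only when enabled AND the
  // image index contains sub-sections; old-format images have none, so they
  // fall back to the serial path.
  static String loadInodeSection(boolean loadInParallel, List<Section> subSections,
      ExecutorService executor) throws Exception {
    List<Section> stage = getSubSectionsOfName(subSections, "INODE_SUB");
    if (loadInParallel && !stage.isEmpty()) {
      List<Future<?>> futures = new ArrayList<>();
      for (Section s : stage) {
        futures.add(executor.submit(() -> { /* load one sub-section here */ }));
      }
      for (Future<?> f : futures) {
        f.get(); // wait for all sub-section loads to finish
      }
      return "parallel:" + stage.size();
    }
    return "serial"; // parallel disabled, or old-format image with no index
  }

  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newFixedThreadPool(4);
    List<Section> newFormat = List.of(
        new Section("INODE_SUB", 0), new Section("INODE_SUB", 4096));
    List<Section> oldFormat = new ArrayList<>(); // no sub-section index at all
    System.out.println(loadInodeSection(true, newFormat, executor)); // parallel:2
    System.out.println(loadInodeSection(true, oldFormat, executor)); // serial
    executor.shutdown();
    executor.awaitTermination(5, TimeUnit.SECONDS);
  }
}
```

A unit test against a real old-format image in src/test/resources/ would exercise exactly the second case: the loader finds no sub-sections and must still produce a correct namespace via the serial path.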
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.