sodonnel commented on a change in pull request #3512:
URL: https://github.com/apache/hadoop/pull/3512#discussion_r721253275
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
##########
@@ -279,6 +283,9 @@ void loadINodeDirectorySection(InputStream in) throws
IOException {
INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
for (long id : e.getChildrenList()) {
INode child = dir.getInode(id);
+ if (child.isDirectory()) {
Review comment:
We are only incrementing here if it's a directory. The inode table /
section contains an entry for every file and directory in the system.
The directory section is what links them all together into the parent /
child relationship, so it should contain about the same number of entries as
inodes.
I am not sure it makes sense to count only the directories here, as we
have already counted them in the inode section.
Why do you want to count just directories? Would it make more sense to count
each entry and child entry, to give an idea of the number of entries processed
by each parallel section?
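
As a rough sketch of the alternative suggested above (counting every entry
and child entry rather than only directories), something like the following
could work. All names here are illustrative, not the actual code in
FSImageFormatPBINode; `countEntries` and the nested-list representation of
the directory section are assumptions for the sake of the example.

```java
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

public class LoadCounterSketch {
    // Hypothetical: counts every directory entry plus every child entry,
    // so the counter reflects the total work done by a parallel section
    // rather than only the number of directories encountered.
    static long countEntries(List<List<Long>> childrenPerParent) {
        LongAdder processed = new LongAdder();
        for (List<Long> children : childrenPerParent) {
            processed.increment();          // the directory entry itself
            for (long childId : children) {
                processed.increment();      // each child entry, file or dir
            }
        }
        return processed.sum();
    }

    public static void main(String[] args) {
        // Two parent entries with 3 and 2 children: 2 + 5 = 7 entries total.
        System.out.println(countEntries(
            List.of(List.of(1L, 2L, 3L), List.of(4L, 5L))));
    }
}
```

With this style of counting, each parallel sub-section reports a number
proportional to the entries it actually processed, which may be more useful
for progress reporting than a directory-only count.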
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]