[ https://issues.apache.org/jira/browse/HDFS-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121324#comment-15121324 ]

Vinayakumar B commented on HDFS-9697:
-------------------------------------

I followed the same steps and wrote a test, and I got the exception during the 
deletion of s2 itself. It didn't even get as far as saving the fsimage. But with 
the patch posted in HDFS-9406 applied, the issue didn't occur.
Are we missing something else here? Since the RPC itself fails, there should be 
no chance of image corruption.

{noformat}org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException):
 java.lang.NullPointerException
        at java.util.Objects.requireNonNull(Unknown Source)
        at java.util.Arrays$ArrayList.<init>(Unknown Source)
        at java.util.Arrays.asList(Unknown Source)
        at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.storagespaceConsumedContiguous(INodeFile.java:843)
        at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.storagespaceConsumed(INodeFile.java:813)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.updateQuotaAndCollectBlocks(FileWithSnapshotFeature.java:195)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.destroyDiffAndCollectBlocks(FileDiff.java:112)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.destroyDiffAndCollectBlocks(FileDiff.java:1)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.AbstractINodeDiffList.deleteSnapshotDiff(AbstractINodeDiffList.java:78)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.cleanFile(FileWithSnapshotFeature.java:136)
        at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.cleanSubtree(INodeFile.java:588)
        at 
org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.destroyAndCollectBlocks(INodeReference.java:596)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$1.process(DirectoryWithSnapshotFeature.java:210)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$1.process(DirectoryWithSnapshotFeature.java:1)
        at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:464)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff.combinePosteriorAndCollectBlocks(DirectoryWithSnapshotFeature.java:205)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff.combinePosteriorAndCollectBlocks(DirectoryWithSnapshotFeature.java:1)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.AbstractINodeDiffList.deleteSnapshotDiff(AbstractINodeDiffList.java:91)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:731)
        at 
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:801)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:215)
        at 
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:267)
        at 
org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:235)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.deleteSnapshot(FSDirSnapshotOp.java:221)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteSnapshot(FSNamesystem.java:5896)
{noformat}
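The top frames show {{Objects.requireNonNull}} firing inside {{Arrays.asList}}, i.e. {{storagespaceConsumedContiguous}} is handing a null block array to {{Arrays.asList}}. A minimal, self-contained sketch of that failure mode (the {{blocks}} variable is a stand-in for the file's block array, not actual Hadoop code):

```java
import java.util.Arrays;

public class NullBlockArrayDemo {
    public static void main(String[] args) {
        // Stand-in for INodeFile's block array; assumed to have become
        // null after the rename/delete/deleteSnapshot sequence.
        Object[] blocks = null;

        try {
            // Arrays.asList(T...) calls Objects.requireNonNull on the
            // backing array, so a null array throws NPE immediately,
            // matching the top three frames of the stack trace above.
            Arrays.asList(blocks);
        } catch (NullPointerException e) {
            System.out.println("NPE from Arrays.asList on a null array");
        }
    }
}
```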

> NN fails to restart due to corrupt fsimage caused by snapshot handling
> ----------------------------------------------------------------------
>
>                 Key: HDFS-9697
>                 URL: https://issues.apache.org/jira/browse/HDFS-9697
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>
> This is related to HDFS-9406, but not quite the same symptom.
> {quote}
> ERROR namenode.NameNode: Failed to start namenode.
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadINodeReference(FSImageFormatPBSnapshot.java:114)
>       at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FSImageFormatPBSnapshot$Loader.loadINodeReferenceSection(FSImageFormatPBSnapshot.java:105)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:258)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:180)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:929)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:913)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:732)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:668)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1062)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:766)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:589)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:818)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:797)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1561)
> {quote}
> A sequence I found that reproduces the exception stack is:
> {code}
> hadoop fs -mkdir /st
> hadoop fs -mkdir /st/y
> hadoop fs -mkdir /nonst
> hadoop fs -mkdir /nonst/trash
> hdfs dfsadmin -allowSnapshot /st
> hdfs dfs -createSnapshot /st s0
> hadoop fs -touchz /st/y/nn.log
> hdfs dfs -createSnapshot /st s1
> hadoop fs -mv /st/y/nn.log /st/y/nn1.log
> hdfs dfs -createSnapshot /st s2
> hadoop fs -mkdir /nonst/trash/st
> hadoop fs -mv /st/y /nonst/trash/st
> hadoop fs -rmr /nonst/trash
> hdfs dfs -deleteSnapshot /st s1
> hdfs dfs -deleteSnapshot /st s2
> hdfs dfsadmin -safemode enter
> hdfs dfsadmin -saveNamespace
> hdfs dfsadmin -safemode leave
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)