[ https://issues.apache.org/jira/browse/HDFS-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Shashikant Banerjee resolved HDFS-15568.
----------------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

> namenode start failed to start when dfs.namenode.snapshot.max.limit set
> -----------------------------------------------------------------------
>
>                 Key: HDFS-15568
>                 URL: https://issues.apache.org/jira/browse/HDFS-15568
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>            Reporter: Nilotpal Nandi
>            Assignee: Shashikant Banerjee
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 11:35:05.872 AM ERROR NameNode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Failed to add snapshot: there are already 20 snapshot(s) and the max snapshot limit is 20
> 	at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.addSnapshot(DirectorySnapshottableFeature.java:181)
> 	at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addSnapshot(INodeDirectory.java:285)
> 	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.createSnapshot(SnapshotManager.java:447)
> 	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:802)
> 	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
> 	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
> Steps to reproduce:
> ----------------------
> 1. Set the directory-level snapshot limit to 100.
> 2. Create 100 snapshots.
> 3. Delete all 100 snapshots (in order).
> 4. No snapshots exist now.
> 5. Lower the directory-level snapshot limit to 20.
> 6. Restart HDFS.
> 7. Namenode fails to start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
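The stack trace shows the exception coming out of FSEditLogLoader.applyEditLogOp, i.e. while replaying the edit log at startup, so the limit check is hitting the historical create operations rather than the final (empty) snapshot state. A minimal standalone Java sketch of that replay behavior — illustrative only, not HDFS code; the class and method names here are made up for the example:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch (NOT HDFS code): replaying a log of snapshot create/delete ops
// while enforcing a per-directory snapshot limit on every create,
// the way the startup edit-log replay path does.
public class ReplaySketch {

    static Set<String> replay(String[][] ops, int maxLimit) {
        Set<String> live = new LinkedHashSet<>();
        for (String[] op : ops) {
            if (op[0].equals("create")) {
                // The limit check runs against each replayed create op,
                // not against the state the log eventually reaches.
                if (live.size() >= maxLimit) {
                    throw new IllegalStateException(
                        "Failed to add snapshot: there are already " + live.size()
                        + " snapshot(s) and the max snapshot limit is " + maxLimit);
                }
                live.add(op[1]);
            } else {
                live.remove(op[1]);
            }
        }
        return live;
    }

    public static void main(String[] args) {
        // Edit log from the reproduction steps:
        // 100 creates followed by 100 deletes (in order).
        String[][] log = new String[200][];
        for (int i = 0; i < 100; i++) log[i] = new String[]{"create", "s" + i};
        for (int i = 0; i < 100; i++) log[100 + i] = new String[]{"delete", "s" + i};

        // With the original limit of 100, replay succeeds and ends
        // with zero live snapshots.
        System.out.println("limit 100 -> "
            + replay(log, 100).size() + " snapshots left");

        // With the lowered limit of 20, the 21st replayed create fails,
        // even though the log's final state contains no snapshots at all.
        try {
            replay(log, 20);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this reading, lowering dfs.namenode.snapshot.max.limit below a count the directory once reached makes restart fail until the old ops are checkpointed away, which is consistent with the fix relaxing the check during edit-log loading.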