[jira] [Resolved] (HDFS-16187) SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing
[ https://issues.apache.org/jira/browse/HDFS-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee resolved HDFS-16187.
 Fix Version/s: 1.3.0
 Resolution: Fixed

> SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN
> restarts with checkpointing
> ---
>
> Key: HDFS-16187
> URL: https://issues.apache.org/jira/browse/HDFS-16187
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Reporter: Srinivasu Majeti
> Assignee: Shashikant Banerjee
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.3.0
>
> Time Spent: 4h
> Remaining Estimate: 0h
>
> The below test shows that the snapshot diff across snapshots is not
> consistent with Xattrs (an EZ setting the Xattr here) across NN restarts
> with a checkpointed FsImage.
> {code:java}
> @Test
> public void testEncryptionZonesWithSnapshots() throws Exception {
>   final Path snapshottable = new Path("/zones");
>   fsWrapper.mkdir(snapshottable, FsPermission.getDirDefault(), true);
>   dfsAdmin.allowSnapshot(snapshottable);
>   dfsAdmin.createEncryptionZone(snapshottable, TEST_KEY, NO_TRASH);
>   fs.createSnapshot(snapshottable, "snap1");
>   SnapshotDiffReport report =
>       fs.getSnapshotDiffReport(snapshottable, "snap1", "");
>   Assert.assertEquals(0, report.getDiffList().size());
>   report = fs.getSnapshotDiffReport(snapshottable, "snap1", "");
>   System.out.println(report);
>   Assert.assertEquals(0, report.getDiffList().size());
>   fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
>   fs.saveNamespace();
>   fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
>   cluster.restartNameNode(true);
>   report = fs.getSnapshotDiffReport(snapshottable, "snap1", "");
>   Assert.assertEquals(0, report.getDiffList().size());
> }
> {code}
> {code:java}
> Pre Restart:
> Difference between snapshot snap1 and current directory under directory /zones:
>
> Post Restart:
> Difference between snapshot snap1 and current directory under directory /zones:
> M .
> {code}
> The side effect of this behavior is: distcp with snapshot diff would fail
> with the below error, complaining that the target cluster has some data changed.
> {code:java}
> WARN tools.DistCp: The target has been modified since snapshot x
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
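The distcp failure mode mentioned above can be modelled very simply: a `-diff` sync is only safe when the diff between the common snapshot and the current state of the target is empty, and the spurious `M .` entry produced after the NN restart violates that precondition. The sketch below is a hypothetical illustration of that check, not the actual DistCp code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SyncPrecondition {
    // DistCp -diff sync requires that the target has not changed since the
    // common snapshot; modelled here as an empty snapshot-diff entry list.
    public static boolean canUseSnapshotDiff(List<String> targetDiffEntries) {
        return targetDiffEntries.isEmpty();
    }

    public static void main(String[] args) {
        // Pre-restart: empty diff, sync is allowed.
        System.out.println(canUseSnapshotDiff(Collections.emptyList()));
        // Post-restart: the spurious "M ." entry aborts the sync with
        // "The target has been modified since snapshot x".
        System.out.println(canUseSnapshotDiff(Arrays.asList("M .")));
    }
}
```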
[jira] [Created] (HDFS-16187) SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing
Shashikant Banerjee created HDFS-16187:
--

Summary: SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing
Key: HDFS-16187
URL: https://issues.apache.org/jira/browse/HDFS-16187
Project: Hadoop HDFS
Issue Type: Bug
Components: snapshots
Reporter: Srinivasu Majeti
Assignee: Shashikant Banerjee

The below test shows that the snapshot diff across snapshots is not consistent with Xattrs (an EZ setting the Xattr here) across NN restarts with a checkpointed FsImage.
{code:java}
@Test
public void testEncryptionZonesWithSnapshots() throws Exception {
  final Path snapshottable = new Path("/zones");
  fsWrapper.mkdir(snapshottable, FsPermission.getDirDefault(), true);
  dfsAdmin.allowSnapshot(snapshottable);
  dfsAdmin.createEncryptionZone(snapshottable, TEST_KEY, NO_TRASH);
  fs.createSnapshot(snapshottable, "snap1");
  SnapshotDiffReport report =
      fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  Assert.assertEquals(0, report.getDiffList().size());
  report = fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  System.out.println(report);
  Assert.assertEquals(0, report.getDiffList().size());
  fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
  fs.saveNamespace();
  fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
  cluster.restartNameNode(true);
  report = fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  Assert.assertEquals(0, report.getDiffList().size());
}
{code}
{code:java}
Pre Restart:
Difference between snapshot snap1 and current directory under directory /zones:

Post Restart:
Difference between snapshot snap1 and current directory under directory /zones:
M .
{code}
[jira] [Commented] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)
[ https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388168#comment-17388168 ]

Shashikant Banerjee commented on HDFS-16144:

The patch v3 looks good. +1

> Revert HDFS-15372 (Files in snapshots no longer see attribute provider
> permissions)
> ---
>
> Key: HDFS-16144
> URL: https://issues.apache.org/jira/browse/HDFS-16144
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Attachments: HDFS-16144.001.patch, HDFS-16144.002.patch,
> HDFS-16144.003.patch
>
> In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3.
> When a user accesses a file in a snapshot, if an attribute provider is
> configured it would see the original file path (ie no .snapshot folder) in
> Hadoop 2, but it would see the snapshot path in Hadoop 3.
> HDFS-15372 changed this back, but I noted at the time it may make sense for
> the provider to see the actual snapshot path instead.
> Recently we discovered HDFS-16132 where the HDFS-15372 does not work
> correctly. At this stage I believe it is better to revert HDFS-15372 as the
> fix to this issue is probably not trivial and allow providers to see the
> actual path the user accessed.
[jira] [Commented] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)
[ https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387869#comment-17387869 ]

Shashikant Banerjee commented on HDFS-16144:

+1

> Revert HDFS-15372 (Files in snapshots no longer see attribute provider
> permissions)
> ---
>
> Key: HDFS-16144
> URL: https://issues.apache.org/jira/browse/HDFS-16144
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Attachments: HDFS-16144.001.patch, HDFS-16144.002.patch
>
> In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3.
> When a user accesses a file in a snapshot, if an attribute provider is
> configured it would see the original file path (ie no .snapshot folder) in
> Hadoop 2, but it would see the snapshot path in Hadoop 3.
> HDFS-15372 changed this back, but I noted at the time it may make sense for
> the provider to see the actual snapshot path instead.
> Recently we discovered HDFS-16132 where the HDFS-15372 does not work
> correctly. At this stage I believe it is better to revert HDFS-15372 as the
> fix to this issue is probably not trivial and allow providers to see the
> actual path the user accessed.
[jira] [Updated] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff
[ https://issues.apache.org/jira/browse/HDFS-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-16145:
---
Description:
Distcp with snapshot diff and with filters marks a Rename as a delete operation on the target if the rename target is a directory which is excluded by the filter. But files/subdirs created or modified after the old snapshot and prior to the Rename will still be present as modified/created entries in the final copy list. Since the parent directory is marked for deletion, these subsequent create/modify entries should be ignored while building the final copy list. In such cases, when the final copy list is built, distcp tries to do a lookup for each created/modified file in the newer snapshot, which fails because the parent dir has already been moved to a new location in the later snapshot.
{code:java}
sudo -u kms hadoop key create testkey
hadoop fs -mkdir -p /data/gcgdlknnasg/
hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
hadoop fs -mkdir -p /dest/gcgdlknnasg
hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
hdfs dfs -mkdir /data/gcgdlknnasg/dir1
hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/
hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/

[root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
drwxrwxrwt - hdfs supergroup 0 2021-07-16 14:05 /data/gcgdlknnasg/.Trash
drwxr-xr-x - hdfs supergroup 0 2021-07-16 13:07 /data/gcgdlknnasg/dir1
[root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
[root@nightly62x-1 logs]# hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/

hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
hdfs dfs -mkdir /data/gcgdlknnasg/dir1/

===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the replication schedule.
You get into the below error and failure of the BDR job.

21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - java.io.FileNotFoundException: File does not exist: /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
	……..
{code}

was:
Distcp with snapshot diff and with filters marks a Rename as a delete operation on the target if the rename target is a directory which is excluded by the filter. But files/subdirs created or modified after the old snapshot and prior to the Rename will still be present as modified/created entries in the final copy list. Since the parent directory is marked for deletion, these subsequent create/modify entries should be ignored while building the final copy list. In such cases, when the final copy list is built, distcp tries to do a lookup for each created/modified file in the newer snapshot, which fails because the parent dir has already been moved to a new location in the later snapshot.
{code:java}
sudo -u kms hadoop key create testkey
hadoop fs -mkdir -p /data/gcgdlknnasg/
hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
hadoop fs -mkdir -p /dest/gcgdlknnasg
hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
hdfs dfs -mkdir /data/gcgdlknnasg/dir1
hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/
hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/

[root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
drwxrwxrwt - hdfs supergroup 0 2021-07-16 14:05 /data/gcgdlknnasg/.Trash
drwxr-xr-x - hdfs supergroup 0 2021-07-16 13:07 /data/gcgdlknnasg/dir1
[root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
[root@nightly62x-1 logs]# hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/

hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
hdfs dfs -mkdir /data/gcgdlknnasg/dir1/

===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the replication schedule.
You get into the below error and failure of the BDR job.

21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - java.io.FileNotFoundException: File does not exist: /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
	……..
{code}

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: distcp
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Distcp with snapshotdiff and
[jira] [Created] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff
Shashikant Banerjee created HDFS-16145:
--

Summary: CopyListing fails with FNF exception with snapshot diff
Key: HDFS-16145
URL: https://issues.apache.org/jira/browse/HDFS-16145
Project: Hadoop HDFS
Issue Type: Bug
Components: distcp
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee

Distcp with snapshot diff and with filters marks a Rename as a delete operation on the target if the rename target is a directory which is excluded by the filter. But files/subdirs created or modified after the old snapshot and prior to the Rename will still be present as modified/created entries in the final copy list. Since the parent directory is marked for deletion, these subsequent create/modify entries should be ignored while building the final copy list. In such cases, when the final copy list is built, distcp tries to do a lookup for each created/modified file in the newer snapshot, which fails because the parent dir has already been moved to a new location in the later snapshot.
{code:java}
sudo -u kms hadoop key create testkey
hadoop fs -mkdir -p /data/gcgdlknnasg/
hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
hadoop fs -mkdir -p /dest/gcgdlknnasg
hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
hdfs dfs -mkdir /data/gcgdlknnasg/dir1
hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/
hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/

[root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
drwxrwxrwt - hdfs supergroup 0 2021-07-16 14:05 /data/gcgdlknnasg/.Trash
drwxr-xr-x - hdfs supergroup 0 2021-07-16 13:07 /data/gcgdlknnasg/dir1
[root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
[root@nightly62x-1 logs]# hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/

hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
hdfs dfs -mkdir /data/gcgdlknnasg/dir1/

===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the replication schedule.
You get into the below error and failure of the BDR job.

21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - java.io.FileNotFoundException: File does not exist: /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
	……..
{code}
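The fix direction described above, ignoring create/modify entries whose parent directory is already marked for deletion on the target, can be sketched as a simple path-prefix filter. This is a hypothetical illustration (`CopyListFilter` and its `prune` helper are invented names, not the actual DistCp CopyListing code):

```java
import java.util.ArrayList;
import java.util.List;

public class CopyListFilter {
    // Drop create/modify entries that fall under a directory already marked
    // for deletion (e.g. a rename whose target is excluded by a filter).
    // Paths are plain "/a/b/c" strings for the sake of the sketch.
    public static List<String> prune(List<String> entries, List<String> deletedDirs) {
        List<String> result = new ArrayList<>();
        for (String entry : entries) {
            boolean underDeleted = false;
            for (String dir : deletedDirs) {
                // An entry is skipped if it is the deleted dir itself or
                // any path nested beneath it.
                if (entry.equals(dir) || entry.startsWith(dir + "/")) {
                    underDeleted = true;
                    break;
                }
            }
            if (!underDeleted) {
                result.add(entry);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // /data/dir1 is marked deleted, so the file created under it is
        // pruned instead of triggering a doomed lookup in the new snapshot.
        System.out.println(prune(
            List.of("/data/dir1/hosts", "/data/keep.txt"),
            List.of("/data/dir1")));
    }
}
```

Pruning these entries up front avoids the `FileNotFoundException` shown in the log, since no lookup is attempted for paths whose parent has already moved in the later snapshot.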
[jira] [Updated] (HDFS-16132) SnapshotDiff report fails with invalid path assertion with external Attribute provider
[ https://issues.apache.org/jira/browse/HDFS-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-16132:
---
Description:
The issue can be reproduced with the below unit test:
{code:java}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
index 512d1029835..27b80882766 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
@@ -36,6 +36,7 @@
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Lists;
@@ -89,7 +90,7 @@ public void checkPermissionWithContext(
         AuthorizationContext authzContext) throws AccessControlException {
       if (authzContext.getAncestorIndex() > 1
           && authzContext.getInodes()[1].getLocalName().equals("user")
-          && authzContext.getInodes()[2].getLocalName().equals("acl")) {
+          && authzContext.getInodes()[2].getLocalName().equals("acl") || runPermissionCheck) {
         this.ace.checkPermissionWithContext(authzContext);
       }
       CALLED.add("checkPermission|" + authzContext.getAncestorAccess()
@@ -598,6 +599,55 @@ public Void run() throws Exception {
         return null;
       }
     });
+  }
+
+  @Test
+  public void testAttrProviderSeesResolvedSnapahotPaths1() throws Exception {
+    runPermissionCheck = true;
+    FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
+    DistributedFileSystem hdfs = miniDFS.getFileSystem();
+    final Path parent = new Path("/user");
+    hdfs.mkdirs(parent);
+    fs.setPermission(parent, new FsPermission(HDFS_PERMISSION));
+    final Path sub1 = new Path(parent, "sub1");
+    final Path sub1foo = new Path(sub1, "foo");
+    hdfs.mkdirs(sub1);
+    hdfs.mkdirs(sub1foo);
+    Path f = new Path(sub1foo, "file0");
+    DFSTestUtil.createFile(hdfs, f, 0, (short) 1, 0);
+    hdfs.allowSnapshot(parent);
+    hdfs.createSnapshot(parent, "s0");
+
+    f = new Path(sub1foo, "file1");
+    DFSTestUtil.createFile(hdfs, f, 0, (short) 1, 0);
+    f = new Path(sub1foo, "file2");
+    DFSTestUtil.createFile(hdfs, f, 0, (short) 1, 0);
+
+    final Path sub2 = new Path(parent, "sub2");
+    hdfs.mkdirs(sub2);
+    final Path sub2foo = new Path(sub2, "foo");
+    // mv /parent/sub1/foo to /parent/sub2/foo
+    hdfs.rename(sub1foo, sub2foo);
+
+    hdfs.createSnapshot(parent, "s1");
+    hdfs.createSnapshot(parent, "s2");
+
+    final Path sub3 = new Path(parent, "sub3");
+    hdfs.mkdirs(sub3);
+    // mv /parent/sub2/foo to /parent/sub3/foo
+    hdfs.rename(sub2foo, sub3);
+
+    hdfs.delete(sub3, true);
+    UserGroupInformation ugi =
+        UserGroupInformation.createUserForTesting("u1", new String[] { "g1" });
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
+        ((DistributedFileSystem) fs).getSnapshotDiffReport(parent, "s1", "s2");
+        CALLED.clear();
+        return null;
+      }
+    });
+  }
 }
{code}
It fails with the below error when executed:
{code:java}
org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): Absolute path required, but got 'foo'
	at org.apache.hadoop.hdfs.server.namenode.INode.checkAbsolutePath(INode.java:838)
	at org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:813)
	at org.apache.hadoop.hdfs.server.namenode.INodesInPath.resolveFromRoot(INodesInPath.java:154)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:447)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSubAccess(FSPermissionChecker.java:507)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:403)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:417)
	at org.apache.hadoop.hdfs.server.namenode.TestINodeAttributeProvider$MyAuthorizationProvider$MyAccessControlEnforcer.checkPermissionWithContext(TestINodeAttributeProvider.java:94)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:297)
	at
{code}
[jira] [Created] (HDFS-16132) SnapshotDiff report fails with invalid path assertion with external Attribute provider
Shashikant Banerjee created HDFS-16132:
--

Summary: SnapshotDiff report fails with invalid path assertion with external Attribute provider
Key: HDFS-16132
URL: https://issues.apache.org/jira/browse/HDFS-16132
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee

The issue can be reproduced with the below unit test:
{code:java}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
index 512d1029835..27b80882766 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
@@ -36,6 +36,7 @@
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Lists;
@@ -89,7 +90,7 @@ public void checkPermissionWithContext(
         AuthorizationContext authzContext) throws AccessControlException {
       if (authzContext.getAncestorIndex() > 1
           && authzContext.getInodes()[1].getLocalName().equals("user")
-          && authzContext.getInodes()[2].getLocalName().equals("acl")) {
+          && authzContext.getInodes()[2].getLocalName().equals("acl") || runPermissionCheck) {
         this.ace.checkPermissionWithContext(authzContext);
       }
       CALLED.add("checkPermission|" + authzContext.getAncestorAccess()
@@ -598,6 +599,55 @@ public Void run() throws Exception {
         return null;
       }
     });
+  }
+
+  @Test
+  public void testAttrProviderSeesResolvedSnapahotPaths1() throws Exception {
+    runPermissionCheck = true;
+    FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
+    DistributedFileSystem hdfs = miniDFS.getFileSystem();
+    final Path parent = new Path("/user");
+    hdfs.mkdirs(parent);
+    fs.setPermission(parent, new FsPermission(HDFS_PERMISSION));
+    final Path sub1 = new Path(parent, "sub1");
+    final Path sub1foo = new Path(sub1, "foo");
+    hdfs.mkdirs(sub1);
+    hdfs.mkdirs(sub1foo);
+    Path f = new Path(sub1foo, "file0");
+    DFSTestUtil.createFile(hdfs, f, 0, (short) 1, 0);
+    hdfs.allowSnapshot(parent);
+    hdfs.createSnapshot(parent, "s0");
+
+    f = new Path(sub1foo, "file1");
+    DFSTestUtil.createFile(hdfs, f, 0, (short) 1, 0);
+    f = new Path(sub1foo, "file2");
+    DFSTestUtil.createFile(hdfs, f, 0, (short) 1, 0);
+
+    final Path sub2 = new Path(parent, "sub2");
+    hdfs.mkdirs(sub2);
+    final Path sub2foo = new Path(sub2, "foo");
+    // mv /parent/sub1/foo to /parent/sub2/foo
+    hdfs.rename(sub1foo, sub2foo);
+
+    hdfs.createSnapshot(parent, "s1");
+    hdfs.createSnapshot(parent, "s2");
+
+    final Path sub3 = new Path(parent, "sub3");
+    hdfs.mkdirs(sub3);
+    // mv /parent/sub2/foo to /parent/sub3/foo
+    hdfs.rename(sub2foo, sub3);
+
+    hdfs.delete(sub3, true);
+    UserGroupInformation ugi =
+        UserGroupInformation.createUserForTesting("u1", new String[] { "g1" });
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
+        ((DistributedFileSystem) fs).getSnapshotDiffReport(parent, "s1", "s2");
+        CALLED.clear();
+        return null;
+      }
+    });
+  }
 }
{code}
It fails with the below error when executed:
{code:java}
org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): Absolute path required, but got 'foo'
	at org.apache.hadoop.hdfs.server.namenode.INode.checkAbsolutePath(INode.java:838)
	at org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:813)
	at org.apache.hadoop.hdfs.server.namenode.INodesInPath.resolveFromRoot(INodesInPath.java:154)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:447)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSubAccess(FSPermissionChecker.java:507)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:403)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:417)
	at
{code}
[jira] [Updated] (HDFS-16121) Iterative snapshot diff report can generate duplicate records for creates, deletes and Renames
[ https://issues.apache.org/jira/browse/HDFS-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-16121:
---
Summary: Iterative snapshot diff report can generate duplicate records for creates, deletes and Renames (was: Iterative snapshot diff report can generate duplicate records for creates and deletes)

> Iterative snapshot diff report can generate duplicate records for creates,
> deletes and Renames
> --
>
> Key: HDFS-16121
> URL: https://issues.apache.org/jira/browse/HDFS-16121
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Reporter: Srinivasu Majeti
> Assignee: Shashikant Banerjee
> Priority: Major
>
> Currently, the iterative snapshot diff report first traverses the created
> list for a directory diff and then the deleted list. If the deleted list is
> smaller than the created list, the offset calculation into the respective
> list is wrong, so the next diff-report generation call starts re-iterating
> entries already processed in the created list, leading to duplicate entries
> in the report.
> The fix is to correct the offset calculation during traversal of the
> deleted list.
[jira] [Created] (HDFS-16121) Iterative snapshot diff report can generate duplicate records for creates and deletes
Shashikant Banerjee created HDFS-16121:
--

Summary: Iterative snapshot diff report can generate duplicate records for creates and deletes
Key: HDFS-16121
URL: https://issues.apache.org/jira/browse/HDFS-16121
Project: Hadoop HDFS
Issue Type: Bug
Components: snapshots
Reporter: Srinivasu Majeti
Assignee: Shashikant Banerjee

Currently, the iterative snapshot diff report first traverses the created list for a directory diff and then the deleted list. If the deleted list is smaller than the created list, the offset calculation into the respective list is wrong, so the next diff-report generation call starts re-iterating entries already processed in the created list, leading to duplicate entries in the report.

The fix is to correct the offset calculation during traversal of the deleted list.
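The offset bug class described above can be illustrated with a simplified model of the iteration: a resume cursor runs over the concatenation of the created and deleted lists, and the index into the deleted list must be the global offset minus the full created-list size. This is a hypothetical sketch (`DiffCursor`/`nextBatch` are invented names, not the NameNode code):

```java
import java.util.ArrayList;
import java.util.List;

public class DiffCursor {
    // Resume iteration over (createdList ++ deletedList) from a global
    // offset, returning at most batchSize entries. Miscomputing the
    // deleted-list start index is what makes a subsequent call re-emit
    // already-processed created entries as duplicates.
    public static List<String> nextBatch(List<String> created, List<String> deleted,
                                         int offset, int batchSize) {
        List<String> batch = new ArrayList<>();
        int total = created.size() + deleted.size();
        for (int i = offset; i < total && batch.size() < batchSize; i++) {
            if (i < created.size()) {
                batch.add("C:" + created.get(i));
            } else {
                // Correct offset into the deleted list: subtract the whole
                // created-list size, never the position within the batch.
                batch.add("D:" + deleted.get(i - created.size()));
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        List<String> created = List.of("a", "b", "c");
        List<String> deleted = List.of("x");
        // Two batches of two cover every entry exactly once, no duplicates.
        System.out.println(nextBatch(created, deleted, 0, 2));
        System.out.println(nextBatch(created, deleted, 2, 2));
    }
}
```

With a correct cursor, consecutive batches partition the created and deleted entries; the reported bug corresponds to the second batch starting back inside the created list.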
[jira] [Resolved] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee resolved HDFS-15961.
 Resolution: Fixed

> standby namenode failed to start ordered snapshot deletion is enabled while
> having snapshottable directories
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: snapshots
> Affects Versions: 3.4.0
> Reporter: Nilotpal Nandi
> Assignee: Shashikant Banerjee
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 3h 50m
> Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Could not provision Trash directory for existing snapshottable directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Signalling async audit cleanup to start.
> {code}
[jira] [Updated] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-15961:
---
Reporter: Nilotpal Nandi (was: Shashikant Banerjee)

> standby namenode failed to start ordered snapshot deletion is enabled while
> having snapshottable directories
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: snapshots
> Affects Versions: 3.4.0
> Reporter: Nilotpal Nandi
> Assignee: Shashikant Banerjee
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Could not provision Trash directory for existing snapshottable directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Signalling async audit cleanup to start.
> {code}
[jira] [Commented] (HDFS-15614) Initialize snapshot trash root during NameNode startup if enabled
[ https://issues.apache.org/jira/browse/HDFS-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320719#comment-17320719 ]

Shashikant Banerjee commented on HDFS-15614:

[~ayushtkn], I just tried making a directory snapshottable, and it seems the .Trash is implicitly created once the config "dfs.namenode.snapshot.trashroot.enabled" is set to true.
{code:java}
hdfs dfsadmin -fs hdfs://127.0.0.1: -allowsnapshot /
hdfs dfs -ls hdfs://127.0.0.1:/
Found 2 items
drwxrwxrwt - shashikant supergroup 0 2021-04-12 11:20 hdfs://127.0.0.1:/.Trash
drwxr-xr-x - shashikant supergroup 0 2021-04-12 11:19 hdfs://127.0.0.1:/dir1
{code}
[~smeng], can you please confirm?

> Initialize snapshot trash root during NameNode startup if enabled
> -
>
> Key: HDFS-15614
> URL: https://issues.apache.org/jira/browse/HDFS-15614
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 3h 10m
> Remaining Estimate: 0h
>
> This is a follow-up to HDFS-15607.
> Goal:
> Initialize (create) snapshot trash root for all existing snapshottable
> directories if {{dfs.namenode.snapshot.trashroot.enabled}} is set to
> {{true}}. So admins won't have to run {{dfsadmin -provisionTrash}} manually
> on all those existing snapshottable directories.
> The change is expected to land in {{FSNamesystem}}.
> Discussion:
> 1. Currently in HDFS-15607, the snapshot trash root creation logic is on the
> client side. But in order for NN to create it at startup, the logic must
> (also) be implemented on the server side as well. -- which is also a
> requirement by WebHDFS (HDFS-15612).
> 2. Alternatively, we can provide an extra parameter to the
> {{-provisionTrash}} command like: {{dfsadmin -provisionTrash -all}} to
> initialize/provision trash root on all existing snapshottable dirs.
[jira] [Comment Edited] (HDFS-15614) Initialize snapshot trash root during NameNode startup if enabled
[ https://issues.apache.org/jira/browse/HDFS-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319168#comment-17319168 ] Shashikant Banerjee edited comment on HDFS-15614 at 4/12/21, 8:46 AM: -- Thanks [~ayushtkn]. The "getAllSnapshottableDirs()" call in itself is not a heavy call IMO. It does not depend on the number of snapshots present in the system. {code:java} 1. What if the mkdirs fail? the namenode will crash, ultimately all Namenodes will try this stuff in an attempt to become active and come out of safemode. Hence all the namenodes will crash. Why mkdirs can fail, could be many reasons, I can tell you one which I tried: Namespace Quotas, and yep the namenode crashed. can be bunch of such cases {code} If mkdir fails to create the Trash directory inside the snapshot root, then strict ordering/processing of all entries during snapshot deletion cannot be guaranteed. If this feature needs to be used, .Trash needs to be within the snapshottable directory, which is similar to the case with encryption zones. {code:java} 2. Secondly, An ambiguity, A client did an allowSnapshot say not from HdfsAdmin he didn't had any Trash directory in the snapshot dir, Suddenly a failover happened, he would get a trash directory in its snapshot directory, Which he never created.{code} If a new directory is made snapshottable with the feature flag turned on, the .Trash directory gets created implicitly as part of the allowSnapshot call. I don't think there is an ambiguity here. {code:java} Third, The time cost, The namenode startup or the namenode failover or let it be coming out of safemode should be fast, They are actually contributing to cluster down time, and here we are doing like first getSnapshottableDirs which itself would be a heavy call if you have a lot of snapshots, then for each directory, one by one we are doing a getFileInfo and then a mkdir, seems like time-consuming. Not sure about the memory consumption at that point due to this though... 
{code} I don't think getSnapshottableDirs() is a very heavy call in typical setups. It has nothing to do with the number of snapshots that exist in the system. {code:java} Fourth, Why the namenode needs to do a client operation? It is the server. And that too while starting up, This mkdirs from namenode to self is itself suspicious, Bunch of namenode crashing coming up trying to become active, trying to push same edits, Hopefully you would have taken that into account and pretty sure such things won't occur, Namenodes won't collide even in the rarest cases. yep and all safe with the permissions.. {code} This is important for provisioning snapshot trash to use the ordered snapshot deletion feature if the system already had pre-existing snapshottable directories. was (Author: shashikant): Thanks [~ayushtkn]. The "getAllSnapshottableDirs()" call in itself is not a heavy call IMO. It does not depend on the number of snapshots present in the system. {code:java} 1. What if the mkdirs fail? the namenode will crash, ultimately all Namenodes will try this stuff in an attempt to become active and come out of safemode. Hence all the namenodes will crash. Why mkdirs can fail, could be many reasons, I can tell you one which I tried: Namespace Quotas, and yep the namenode crashed. can be bunch of such cases {code} If mkdir fails to create the Trash directory inside the snapshot root, then strict ordering/processing of all entries during snapshot deletion cannot be guaranteed. If this feature needs to be used, .Trash needs to be within the snapshottable directory, which is similar to the case with encryption zones. {code:java} 2. 
Secondly, An ambiguity, A client did an allowSnapshot say not from HdfsAdmin he didn't had any Trash directory in the snapshot dir, Suddenly a failover happened, he would get a trash directory in its snapshot directory, Which he never created.{code} If a new directory is made snapshottable with the feature flag turned on, the .Trash directory gets created implicitly as part of the allowSnapshot call. I don't think there is an ambiguity here. {code:java} Third, The time cost, The namenode startup or the namenode failover or let it be coming out of safemode should be fast, They are actually contributing to cluster down time, and here we are doing like first getSnapshottableDirs which itself would be a heavy call if you have a lot of snapshots, then for each directory, one by one we are doing a getFileInfo and then a mkdir, seems like time-consuming. Not sure about the memory consumption at that point due to this though... {code} I don't think getSnapshottableDirs() is a very heavy call in typical setups. It has nothing to do with the number of snapshots that exist in the system. {code:java} Fourth, Why the namenode needs to do a client operation? It is the server. And that too while starting up, This mkdirs from namenode to self is itself suspicious, Bunch of namenode crashing coming up trying to become active, trying to push same edits, Hopefully you would have taken that into account and pretty sure such things won't occur, Namenodes won't collide even in the rarest cases. yep and all safe with the permissions.. {code} This is important for provisioning snapshot trash to use the ordered snapshot deletion feature if the system already had pre-existing snapshottable directories.
[jira] [Commented] (HDFS-15614) Initialize snapshot trash root during NameNode startup if enabled
[ https://issues.apache.org/jira/browse/HDFS-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319168#comment-17319168 ] Shashikant Banerjee commented on HDFS-15614: Thanks [~ayushtkn]. The "getAllSnapshottableDirs()" call in itself is not a heavy call IMO. It does not depend on the number of snapshots present in the system. {code:java} 1. What if the mkdirs fail? the namenode will crash, ultimately all Namenodes will try this stuff in an attempt to become active and come out of safemode. Hence all the namenodes will crash. Why mkdirs can fail, could be many reasons, I can tell you one which I tried: Namespace Quotas, and yep the namenode crashed. can be bunch of such cases {code} If mkdir fails to create the Trash directory inside the snapshot root, then strict ordering/processing of all entries during snapshot deletion cannot be guaranteed. If this feature needs to be used, .Trash needs to be within the snapshottable directory, which is similar to the case with encryption zones. {code:java} 2. Secondly, An ambiguity, A client did an allowSnapshot say not from HdfsAdmin he didn't had any Trash directory in the snapshot dir, Suddenly a failover happened, he would get a trash directory in its snapshot directory, Which he never created.{code} If a new directory is made snapshottable with the feature flag turned on, the .Trash directory gets created implicitly as part of the allowSnapshot call. I don't think there is an ambiguity here. {code:java} Third, The time cost, The namenode startup or the namenode failover or let it be coming out of safemode should be fast, They are actually contributing to cluster down time, and here we are doing like first getSnapshottableDirs which itself would be a heavy call if you have a lot of snapshots, then for each directory, one by one we are doing a getFileInfo and then a mkdir, seems like time-consuming. Not sure about the memory consumption at that point due to this though... 
{code} I don't think getSnapshottableDirs() is a very heavy call in typical setups. It has nothing to do with the number of snapshots that exist in the system. {code:java} Fourth, Why the namenode needs to do a client operation? It is the server. And that too while starting up, This mkdirs from namenode to self is itself suspicious, Bunch of namenode crashing coming up trying to become active, trying to push same edits, Hopefully you would have taken that into account and pretty sure such things won't occur, Namenodes won't collide even in the rarest cases. yep and all safe with the permissions.. {code} This is important for provisioning snapshot trash to use the ordered snapshot deletion feature if the system already had pre-existing snapshottable directories. > Initialize snapshot trash root during NameNode startup if enabled > - > > Key: HDFS-15614 > URL: https://issues.apache.org/jira/browse/HDFS-15614 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 10m > Remaining Estimate: 0h > > This is a follow-up to HDFS-15607. > Goal: > Initialize (create) snapshot trash root for all existing snapshottable > directories if {{dfs.namenode.snapshot.trashroot.enabled}} is set to > {{true}}. So admins won't have to run {{dfsadmin -provisionTrash}} manually > on all those existing snapshottable directories. > The change is expected to land in {{FSNamesystem}}. > Discussion: > 1. Currently in HDFS-15607, the snapshot trash root creation logic is on the > client side. But in order for NN to create it at startup, the logic must > (also) be implemented on the server side as well. -- which is also a > requirement by WebHDFS (HDFS-15612). > 2. Alternatively, we can provide an extra parameter to the > {{-provisionTrash}} command like: {{dfsadmin -provisionTrash -all}} to > initialize/provision trash root on all existing snapshottable dirs. 
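The provisioning step debated in the comment above — enumerate the snapshottable directories, then do a getFileInfo-style existence check and a mkdirs for the missing .Trash roots — can be sketched as follows. This is a minimal model using java.nio.file against a local filesystem, not the real DistributedFileSystem/FSNamesystem APIs; the method name provisionTrashRoots is illustrative only.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class TrashProvisioner {
    /**
     * For each snapshottable directory, create a ".Trash" child if one does
     * not already exist (the check-then-mkdirs pattern discussed above).
     * Returns the directories for which a trash root was actually created.
     */
    static List<Path> provisionTrashRoots(List<Path> snapshottableDirs) throws IOException {
        List<Path> provisioned = new ArrayList<>();
        for (Path dir : snapshottableDirs) {
            Path trash = dir.resolve(".Trash");
            if (Files.notExists(trash)) {   // skip dirs already provisioned
                Files.createDirectory(trash);
                provisioned.add(dir);
            }
        }
        return provisioned;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("snaptest");
        Path a = Files.createDirectory(root.resolve("zoneA"));
        Path b = Files.createDirectory(root.resolve("zoneB"));
        Files.createDirectory(b.resolve(".Trash")); // zoneB already has a trash root
        List<Path> created = provisionTrashRoots(List.of(a, b));
        System.out.println(created); // only zoneA needed provisioning
    }
}
```

Note the cost profile matches the discussion: one existence check plus at most one directory creation per snapshottable directory, independent of how many snapshots each directory holds.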
[jira] [Created] (HDFS-15961) standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories
Shashikant Banerjee created HDFS-15961: -- Summary: standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories Key: HDFS-15961 URL: https://issues.apache.org/jira/browse/HDFS-15961 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Affects Versions: 3.4.0 Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee Fix For: 3.4.0 {code:java} 2021-04-08 12:07:25,398 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Could not provision Trash directory for existing snapshottable directories. Exiting Namenode. 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run() 2021-04-08 12:07:55,596 INFO org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Signalling async audit cleanup to start. {code}
[jira] [Resolved] (HDFS-15817) Rename snapshots while marking them deleted
[ https://issues.apache.org/jira/browse/HDFS-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15817. Fix Version/s: 3.4.0 Resolution: Fixed > Rename snapshots while marking them deleted > > > Key: HDFS-15817 > URL: https://issues.apache.org/jira/browse/HDFS-15817 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > With the ordered snapshot feature turned on, a snapshot will just be marked as > deleted but won't actually be deleted if it's not the oldest one. Since the > snapshot is only marked deleted, creation of a new snapshot having the same > name as the one which was marked deleted will fail. In order to mitigate such > problems, the idea here is to rename the snapshot getting marked as deleted > by appending the deletion timestamp along with the snapshot id to it.
[jira] [Commented] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit
[ https://issues.apache.org/jira/browse/HDFS-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17278911#comment-17278911 ] Shashikant Banerjee commented on HDFS-15820: [~smeng], can you help review this? > Ensure snapshot root trash provisioning happens only post safe mode exit > > > Key: HDFS-15820 > URL: https://issues.apache.org/jira/browse/HDFS-15820 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, on namenode startup, snapshot trash root provisioning starts > along with the trash emptier service but the namenode might not be out of safe mode > by then. This can fail the snapshot trash dir creation, thereby crashing the > namenode. The idea here is to trigger snapshot trash provisioning only post > safe mode exit. > {code:java} > 2021-02-04 11:23:47,323 ERROR > org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring > NN shutdown. Shutting down immediately. > org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create > directory /upgrade/.Trash. Name node is in safe mode. > The reported blocks 0 needs additional 1383 blocks to reach the threshold > 0.9990 of total blocks 1385. > The number of live datanodes 0 needs an additional 1 live datanodes to reach > the minimum number 1. > Safe mode will be turned off automatically once the thresholds have been > reached. 
NamenodeHostName:quasar-brabeg-5.quasar-brabeg.root.hwx.site > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1542) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1529) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3288) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAndProvisionSnapshotTrashRoots(FSNamesystem.java:8269) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1939) > at > org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:967) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:936) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1740) > 2021-02-04 11:23:47,334 INFO org.apache.hadoop.util.ExitUtil: Exiting with > status 1: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot > create directory /upgrade/.Trash. Name node is in safe mode. > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15817) Rename snapshots while marking them deleted
[ https://issues.apache.org/jira/browse/HDFS-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17278909#comment-17278909 ] Shashikant Banerjee commented on HDFS-15817: [~szetszwo], can you please help review this? > Rename snapshots while marking them deleted > > > Key: HDFS-15817 > URL: https://issues.apache.org/jira/browse/HDFS-15817 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > With the ordered snapshot feature turned on, a snapshot will just be marked as > deleted but won't actually be deleted if it's not the oldest one. Since the > snapshot is only marked deleted, creation of a new snapshot having the same > name as the one which was marked deleted will fail. In order to mitigate such > problems, the idea here is to rename the snapshot getting marked as deleted > by appending the deletion timestamp along with the snapshot id to it.
[jira] [Updated] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit
[ https://issues.apache.org/jira/browse/HDFS-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15820: --- Description: Currently, on namenode startup, snapshot trash root provisioning starts as along with trash emptier service but namenode might not be out of safe mode by then. This can fail the snapshot trash dir creation thereby crashing the namenode. The idea here is to trigger snapshot trash provisioning only post safe mode exit. {code:java} 2021-02-04 11:23:47,323 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring NN shutdown. Shutting down immediately. org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /upgrade/.Trash. Name node is in safe mode. The reported blocks 0 needs additional 1383 blocks to reach the threshold 0.9990 of total blocks 1385. The number of live datanodes 0 needs an additional 1 live datanodes to reach the minimum number 1. Safe mode will be turned off automatically once the thresholds have been reached. 
NamenodeHostName:quasar-brabeg-5.quasar-brabeg.root.hwx.site at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1542) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1529) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3288) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAndProvisionSnapshotTrashRoots(FSNamesystem.java:8269) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1939) at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:967) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:936) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1740) 2021-02-04 11:23:47,334 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /upgrade/.Trash. Name node is in safe mode. {code} was:Currently, on namenode startup, snapshot trash root provisioning starts as along with trash emptier service but namenode might not be out of safe mode by then. This can fail the snapshot trash dir creation thereby crashing the namenode. The idea here is to trigger snapshot trash provisioning only post safe mode exit. 
> Ensure snapshot root trash provisioning happens only post safe mode exit > > > Key: HDFS-15820 > URL: https://issues.apache.org/jira/browse/HDFS-15820 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, on namenode startup, snapshot trash root provisioning starts as > along with trash emptier service but namenode might not be out of safe mode > by then. This can fail the snapshot trash dir creation thereby crashing the > namenode. The idea here is to trigger snapshot trash provisioning only post > safe mode exit. > {code:java} > 2021-02-04 11:23:47,323 ERROR > org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring > NN shutdown. Shutting down immediately. > org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create > directory /upgrade/.Trash. Name node is in safe mode. > The reported blocks 0 needs additional 1383 blocks to reach the threshold > 0.9990 of total blocks 1385. > The number of live datanodes 0 needs an additional 1 live datanodes to reach > the minimum number 1. > Safe mode will be turned off automatically once the thresholds have been > reached. NamenodeHostName:quasar-brabeg-5.quasar-brabeg.root.hwx.site > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1542) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1529) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3288) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAndProvisionSnapshotTrashRoots(FSNamesystem.java:8269) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1939) > at > org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61) > at >
[jira] [Created] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit
Shashikant Banerjee created HDFS-15820: -- Summary: Ensure snapshot root trash provisioning happens only post safe mode exit Key: HDFS-15820 URL: https://issues.apache.org/jira/browse/HDFS-15820 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee Currently, on namenode startup, snapshot trash root provisioning starts along with the trash emptier service but the namenode might not be out of safe mode by then. This can fail the snapshot trash dir creation, thereby crashing the namenode. The idea here is to trigger snapshot trash provisioning only post safe mode exit.
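The fix direction described above — do not run trash provisioning while the NameNode may still be in safe mode — amounts to gating the provisioning task on safe-mode state and deferring rather than failing. A minimal sketch of that gating pattern; the names SafeModeGate and provisionIfActive are illustrative, not the actual FSNamesystem API:

```java
import java.util.function.BooleanSupplier;

public class SafeModeGate {
    /**
     * Runs the provisioning task only when the namesystem reports it is out
     * of safe mode. Returns true if the task ran; false means "deferred",
     * so the caller retries after safe-mode exit instead of crashing the NN.
     */
    static boolean provisionIfActive(BooleanSupplier inSafeMode, Runnable provisionTask) {
        if (inSafeMode.getAsBoolean()) {
            return false; // still in safe mode: defer, do not mkdir and fail
        }
        provisionTask.run();
        return true;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // While in safe mode the task is deferred and nothing runs...
        boolean ran = provisionIfActive(() -> true, () -> log.append("provisioned"));
        System.out.println(ran + " [" + log + "]");
        // ...and it runs once safe mode is exited.
        ran = provisionIfActive(() -> false, () -> log.append("provisioned"));
        System.out.println(ran + " [" + log + "]");
    }
}
```

In the real fix the trigger would be the safe-mode exit transition (e.g. when active services start post safe mode), rather than a polling check like this.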
[jira] [Created] (HDFS-15817) Rename snapshots while marking them deleted
Shashikant Banerjee created HDFS-15817: -- Summary: Rename snapshots while marking them deleted Key: HDFS-15817 URL: https://issues.apache.org/jira/browse/HDFS-15817 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Shashikant Banerjee With the ordered snapshot feature turned on, a snapshot will just be marked as deleted but won't actually be deleted if it's not the oldest one. Since the snapshot is only marked deleted, creation of a new snapshot having the same name as the one which was marked deleted will fail. In order to mitigate such problems, the idea here is to rename the snapshot getting marked as deleted by appending the deletion timestamp along with the snapshot id to it.
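The rename-on-delete idea can be illustrated with a small naming helper. The exact separator and format used by HDFS-15817 are not shown in this thread, so the scheme below (snapshot name + deletion timestamp + snapshot id) is a hypothetical sketch of the approach, not the committed implementation:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DeletedSnapshotNamer {
    private static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss");

    /**
     * Produces a collision-free name for a snapshot that is only being
     * *marked* deleted: the original name becomes reusable for future
     * snapshots because the marked-deleted one is renamed with its
     * deletion time and snapshot id.
     */
    static String deletedName(String snapshotName, LocalDateTime deletedAt, int snapshotId) {
        return snapshotName + ".deleted." + TS.format(deletedAt) + ".s" + snapshotId;
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2021, 2, 15, 9, 30, 0);
        // "daily" is freed up for reuse; the marked-deleted copy keeps a unique name.
        System.out.println(deletedName("daily", t, 42)); // daily.deleted.20210215-093000.s42
    }
}
```

Because the timestamp alone could collide for two snapshots deleted in the same second, including the (unique) snapshot id makes the generated name unambiguous.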
[jira] [Assigned] (HDFS-15817) Rename snapshots while marking them deleted
[ https://issues.apache.org/jira/browse/HDFS-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15817: -- Assignee: Shashikant Banerjee > Rename snapshots while marking them deleted > > > Key: HDFS-15817 > URL: https://issues.apache.org/jira/browse/HDFS-15817 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > With the ordered snapshot feature turned on, a snapshot will just be marked as > deleted but won't actually be deleted if it's not the oldest one. Since the > snapshot is only marked deleted, creation of a new snapshot having the same > name as the one which was marked deleted will fail. In order to mitigate such > problems, the idea here is to rename the snapshot getting marked as deleted > by appending the deletion timestamp along with the snapshot id to it.
[jira] [Commented] (HDFS-15619) Metric for ordered snapshot deletion GC thread
[ https://issues.apache.org/jira/browse/HDFS-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264082#comment-17264082 ] Shashikant Banerjee commented on HDFS-15619: We should also add metrics related to quota usage of deleted snapshots, both in terms of namespace as well as disk space. > Metric for ordered snapshot deletion GC thread > -- > > Key: HDFS-15619 > URL: https://issues.apache.org/jira/browse/HDFS-15619 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Nilotpal Nandi >Assignee: Nilotpal Nandi >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Following info should be captured and shown in JMX for garbage collection > thread of ordered snapshot deletion > * metric for all pending snapshots to be GCed > * Number of times GC thread ran > * Number of Snapshots already GCed > * Average time taken by each GC run > * Thread running Status > * metric for failed deletion of GC thread
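The counters listed in the issue above can be modeled as a small metrics holder. This is an illustrative sketch using plain AtomicLongs; a real implementation would use Hadoop's metrics2 framework (@Metric-annotated fields exported over JMX), and the class and field names here are assumptions:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Counters a snapshot-deletion GC thread could expose over JMX. */
public class SnapshotGcMetrics {
    final AtomicLong pendingSnapshots = new AtomicLong(); // marked deleted, not yet GCed
    final AtomicLong gcRuns = new AtomicLong();           // number of times GC thread ran
    final AtomicLong snapshotsGced = new AtomicLong();    // snapshots actually reclaimed
    final AtomicLong failedDeletions = new AtomicLong();  // GC attempts that failed
    private final AtomicLong totalGcTimeMs = new AtomicLong();

    /** Record one completed GC run: its elapsed time and how many snapshots it reclaimed. */
    void recordRun(long elapsedMs, long reclaimed) {
        gcRuns.incrementAndGet();
        totalGcTimeMs.addAndGet(elapsedMs);
        snapshotsGced.addAndGet(reclaimed);
        pendingSnapshots.addAndGet(-reclaimed);
    }

    /** Average time taken by each GC run, in milliseconds. */
    long avgGcTimeMs() {
        long runs = gcRuns.get();
        return runs == 0 ? 0 : totalGcTimeMs.get() / runs;
    }

    public static void main(String[] args) {
        SnapshotGcMetrics m = new SnapshotGcMetrics();
        m.pendingSnapshots.set(5);       // five snapshots marked deleted
        m.recordRun(20, 3);              // first run reclaims three
        m.recordRun(40, 2);              // second run reclaims the rest
        System.out.println(m.avgGcTimeMs() + "ms avg, " + m.pendingSnapshots.get() + " pending");
    }
}
```

Thread running status (the one non-counter item on the list) would be exposed separately, e.g. as a gauge reading the GC thread's state.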
[jira] [Resolved] (HDFS-15687) allowSnapshot fails when directory already has a Trash sub directory
[ https://issues.apache.org/jira/browse/HDFS-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15687. Fix Version/s: 3.4.0 Resolution: Duplicate > allowSnapshot fails when directory already has a Trash sub directory > > > Key: HDFS-15687 > URL: https://issues.apache.org/jira/browse/HDFS-15687 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > Steps : > 1. Create an encryption zone , Trash directory would be created inside EZ > directory. > /opt/cloudera/parcels/CDH/bin/hdfs crypto -createZone -keyName > testkeysnapshot1605613314 -path > /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 > 2. Try to make the EZ directory snapshottable. > /opt/cloudera/parcels/CDH/bin/hdfs dfsadmin -allowSnapshot > /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 > It fails with error : > {noformat} > /opt/cloudera/parcels/CDH/bin/hdfs dfsadmin -allowSnapshot > /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 > 2020-11-17 11:45:16,598|INFO|MainThread|machine.py:180 - > run()||GUID=b35fc918-ed08-4c5d-92c1-c5aab449fb10|allowSnapshot: Can't > provision trash for snapshottable directory > /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 because trash path > /user/hrt_6/test_dir1/snap_encrypt_dir1605613504/.Trash already exists. > 2020-11-17 11:45:16,956|INFO|MainThread|machine.py:209 - > run()||GUID=b35fc918-ed08-4c5d-92c1-c5aab449fb10|Exit Code: 255{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check
[ https://issues.apache.org/jira/browse/HDFS-15689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15689: --- Parent: HDFS-15477 Issue Type: Sub-task (was: Bug) > allow/disallowSnapshot on EZ roots shouldn't fail due to trash > provisioning/emptiness check > --- > > Key: HDFS-15689 > URL: https://issues.apache.org/jira/browse/HDFS-15689 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.4.0 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > h2. Background > 1. HDFS-15607 added a feature that when > {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot will > automatically create a .Trash directory immediately after allowSnapshot > operation so files deleted will be moved into the trash root inside the > snapshottable directory. > 2. HDFS-15539 prevents admins from disallowing snapshot if the trash root > inside is not empty > h2. Problem > 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, currently if the > directory (to be allowed snapshot on) is an EZ root, it throws > {{FileAlreadyExistsException}} because the trash root already exists > (encryption zone has already created an internal trash root). > 2. Similarly, at the moment if we disallow snapshot on an EZ root, it may > complain that the trash root is not empty (or delete it if empty, which is > not desired since EZ will still need it). > h2. Solution > 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, > but informs the admin that the trash already exists. > 2. Ignore {{checkTrashRootAndRemoveIfEmpty()}} check if path is EZ root. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
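The solution in point 1 above — allowSnapshot should tolerate a trash root that the encryption zone already created — is essentially making trash provisioning idempotent. A minimal local-filesystem sketch of that behavior; ensureTrashRoot is an illustrative name, not the actual HDFS method:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TrashRootEnsurer {
    /**
     * Creates dir/.Trash if missing. An already-existing trash root (e.g. one
     * provisioned earlier when the directory became an encryption zone) is
     * reported to the caller, not treated as a failure — so allowSnapshot on
     * an EZ root can succeed instead of throwing FileAlreadyExistsException.
     */
    static boolean ensureTrashRoot(Path dir) throws IOException {
        try {
            Files.createDirectory(dir.resolve(".Trash"));
            return true;  // newly provisioned
        } catch (FileAlreadyExistsException e) {
            return false; // EZ (or a prior call) already created it: fine
        }
    }

    public static void main(String[] args) throws IOException {
        Path zone = Files.createTempDirectory("ez");
        System.out.println(ensureTrashRoot(zone)); // first call creates the trash root
        System.out.println(ensureTrashRoot(zone)); // second call is a no-op, not an error
    }
}
```

The boolean return lets the admin tooling inform the user that the trash root already existed, matching the "informs the admin" wording in the proposed solution.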
[jira] [Created] (HDFS-15687) allowSnapshot fails when directory already has a Trash sub directory
Shashikant Banerjee created HDFS-15687: -- Summary: allowSnapshot fails when directory already has a Trash sub directory Key: HDFS-15687 URL: https://issues.apache.org/jira/browse/HDFS-15687 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Nilotpal Nandi Assignee: Shashikant Banerjee Steps : 1. Create an encryption zone , Trash directory would be created inside EZ directory. /opt/cloudera/parcels/CDH/bin/hdfs crypto -createZone -keyName testkeysnapshot1605613314 -path /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 2. Try to make the EZ directory snapshottable. /opt/cloudera/parcels/CDH/bin/hdfs dfsadmin -allowSnapshot /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 It fails with error : {noformat} /opt/cloudera/parcels/CDH/bin/hdfs dfsadmin -allowSnapshot /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 2020-11-17 11:45:16,598|INFO|MainThread|machine.py:180 - run()||GUID=b35fc918-ed08-4c5d-92c1-c5aab449fb10|allowSnapshot: Can't provision trash for snapshottable directory /user/hrt_6/test_dir1/snap_encrypt_dir1605613504 because trash path /user/hrt_6/test_dir1/snap_encrypt_dir1605613504/.Trash already exists. 2020-11-17 11:45:16,956|INFO|MainThread|machine.py:209 - run()||GUID=b35fc918-ed08-4c5d-92c1-c5aab449fb10|Exit Code: 255{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15619) Metric for ordered snapshot deletion GC thread
[ https://issues.apache.org/jira/browse/HDFS-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15619: --- Parent: HDFS-15477 Issue Type: Sub-task (was: Task) > Metric for ordered snapshot deletion GC thread > -- > > Key: HDFS-15619 > URL: https://issues.apache.org/jira/browse/HDFS-15619 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Nilotpal Nandi >Assignee: Nilotpal Nandi >Priority: Major > > Following info should be captured and shown in JMX for garbage collection > thread of ordered snapshot deletion > * metric for all pending snapshots to be GCed > * Number of times GC thread ran > * Number of Snapshots already GCed > * Average time taken by each GC run > * Thread running Status > * metric for failed deletion of GC thread -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15611) Add list Snapshot command in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15611: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Add list Snapshot command in WebHDFS > > > Key: HDFS-15611 > URL: https://issues.apache.org/jira/browse/HDFS-15611 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 50m > Remaining Estimate: 0h > > Idea here is to expose lsSnapshot cmd over WebHdfs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-15611) Add list Snapshot command in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15611: -- Assignee: Shashikant Banerjee > Add list Snapshot command in WebHDFS > > > Key: HDFS-15611 > URL: https://issues.apache.org/jira/browse/HDFS-15611 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > Idea here is to expose lsSnapshot cmd over WebHdfs.
[jira] [Updated] (HDFS-15611) Add list Snapshot command in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15611: --- Summary: Add list Snapshot command in WebHDFS (was: Add lsSnapshot command in WebHDFS) > Add list Snapshot command in WebHDFS > > > Key: HDFS-15611 > URL: https://issues.apache.org/jira/browse/HDFS-15611 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > Idea here is to expose lsSnapshot cmd over WebHdfs.
[jira] [Updated] (HDFS-15611) Add lsSnapshot command in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15611: --- Summary: Add lsSnapshot command in WebHDFS (was: Add lsSnapshot command in HDFS) > Add lsSnapshot command in WebHDFS > - > > Key: HDFS-15611 > URL: https://issues.apache.org/jira/browse/HDFS-15611 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > Idea here is to expose lsSnapshot cmd over WebHdfs.
[jira] [Created] (HDFS-15611) Add lsSnapshot command in HDFS
Shashikant Banerjee created HDFS-15611: -- Summary: Add lsSnapshot command in HDFS Key: HDFS-15611 URL: https://issues.apache.org/jira/browse/HDFS-15611 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Fix For: 3.4.0 Idea here is to expose lsSnapshot cmd over WebHdfs.
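[Editor's note] Exposing lsSnapshot over WebHDFS amounts to mapping the shell command onto a REST GET against the snapshottable directory. The sketch below only builds the request URL; the op name `GETSNAPSHOTLIST` is an assumption following WebHDFS naming conventions, not taken from the committed patch:

```java
// Hypothetical sketch of the WebHDFS request the new list-snapshot command
// would map to. The "GETSNAPSHOTLIST" op name is assumed, not confirmed here.
class WebHdfsSnapshotList {
    static String requestUrl(String nnHost, int httpPort, String snapshottableDir) {
        return "http://" + nnHost + ":" + httpPort
            + "/webhdfs/v1" + snapshottableDir   // WebHDFS path prefix + target dir
            + "?op=GETSNAPSHOTLIST";             // assumed op name for lsSnapshot
    }
}
```

A client would issue this URL with curl or an HTTP library and parse the JSON list of snapshot statuses from the response.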
[jira] [Resolved] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15595. Fix Version/s: 3.4.0 Resolution: Fixed > TestSnapshotCommands.testMaxSnapshotLimit fails in trunk > > > Key: HDFS-15595 > URL: https://issues.apache.org/jira/browse/HDFS-15595 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, snapshots, test >Reporter: Mingliang Liu >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > See > [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/] > for a sample error. > Sample error stack: > {quote} > Error Message > The real output is: createSnapshot: Failed to create snapshot: there are > already 4 snapshot(s) and the per directory snapshot limit is 3 > . > It should contain: Failed to add snapshot: there are already 3 snapshot(s) > and the max snapshot limit is 3 > Stacktrace > java.lang.AssertionError: > The real output is: createSnapshot: Failed to create snapshot: there are > already 4 snapshot(s) and the per directory snapshot limit is 3 > . 
> It should contain: Failed to add snapshot: there are already 3 snapshot(s) > and the max snapshot limit is 3 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934) > at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942) > at > org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {quote} > I can also reproduce this locally.
[jira] [Resolved] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled
[ https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15590. Resolution: Fixed > namenode fails to start when ordered snapshot deletion feature is disabled > -- > > Key: HDFS-15590 > URL: https://issues.apache.org/jira/browse/HDFS-15590 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > {code:java} > 1. Enabled ordered deletion snapshot feature. > 2. Created snapshottable directory - /user/hrt_6/atrr_dir1 > 3. Created snapshots s0, s1, s2. > 4. Deleted snapshot s2 > 5. Delete snapshot s0, s1, s2 again > 6. Disable ordered deletion snapshot feature > 5. Restart Namenode > Failed to start namenode. > org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 > from path /user/hrt_6/atrr_dir2: the snapshot does not exist. 
> at > org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237) > at > org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293) > at > org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737) > {code}
[jira] [Commented] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201254#comment-17201254 ] Shashikant Banerjee commented on HDFS-15595: Thanks [~liuml07] for filing the issue. The test failure will be addressed with https://issues.apache.org/jira/browse/HDFS-15590. > TestSnapshotCommands.testMaxSnapshotLimit fails in trunk > > > Key: HDFS-15595 > URL: https://issues.apache.org/jira/browse/HDFS-15595 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, snapshots, test >Reporter: Mingliang Liu >Assignee: Shashikant Banerjee >Priority: Major > > See > [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/] > for a sample error. > Sample error stack: > {quote} > Error Message > The real output is: createSnapshot: Failed to create snapshot: there are > already 4 snapshot(s) and the per directory snapshot limit is 3 > . > It should contain: Failed to add snapshot: there are already 3 snapshot(s) > and the max snapshot limit is 3 > Stacktrace > java.lang.AssertionError: > The real output is: createSnapshot: Failed to create snapshot: there are > already 4 snapshot(s) and the per directory snapshot limit is 3 > . 
> It should contain: Failed to add snapshot: there are already 3 snapshot(s) > and the max snapshot limit is 3 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934) > at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942) > at > org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {quote} > I can also reproduce this locally.
[jira] [Assigned] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15595: -- Assignee: Shashikant Banerjee > TestSnapshotCommands.testMaxSnapshotLimit fails in trunk > > > Key: HDFS-15595 > URL: https://issues.apache.org/jira/browse/HDFS-15595 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, snapshots, test >Reporter: Mingliang Liu >Assignee: Shashikant Banerjee >Priority: Major > > See > [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/] > for a sample error. > Sample error stack: > {quote} > Error Message > The real output is: createSnapshot: Failed to create snapshot: there are > already 4 snapshot(s) and the per directory snapshot limit is 3 > . > It should contain: Failed to add snapshot: there are already 3 snapshot(s) > and the max snapshot limit is 3 > Stacktrace > java.lang.AssertionError: > The real output is: createSnapshot: Failed to create snapshot: there are > already 4 snapshot(s) and the per directory snapshot limit is 3 > . 
> It should contain: Failed to add snapshot: there are already 3 snapshot(s) > and the max snapshot limit is 3 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934) > at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942) > at > org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {quote} > I can also reproduce this locally.
[jira] [Updated] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled
[ https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15590: --- Reporter: Nilotpal Nandi (was: Shashikant Banerjee) > namenode fails to start when ordered snapshot deletion feature is disabled > -- > > Key: HDFS-15590 > URL: https://issues.apache.org/jira/browse/HDFS-15590 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 10m > Remaining Estimate: 0h > > {code:java} > 1. Enabled ordered deletion snapshot feature. > 2. Created snapshottable directory - /user/hrt_6/atrr_dir1 > 3. Created snapshots s0, s1, s2. > 4. Deleted snapshot s2 > 5. Delete snapshot s0, s1, s2 again > 6. Disable ordered deletion snapshot feature > 5. Restart Namenode > Failed to start namenode. > org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 > from path /user/hrt_6/atrr_dir2: the snapshot does not exist. 
> at > org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237) > at > org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293) > at > org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737) > {code}
[jira] [Assigned] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled
[ https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15590: -- Assignee: Shashikant Banerjee > namenode fails to start when ordered snapshot deletion feature is disabled > -- > > Key: HDFS-15590 > URL: https://issues.apache.org/jira/browse/HDFS-15590 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > {code:java} > 1. Enabled ordered deletion snapshot feature. > 2. Created snapshottable directory - /user/hrt_6/atrr_dir1 > 3. Created snapshots s0, s1, s2. > 4. Deleted snapshot s2 > 5. Delete snapshot s0, s1, s2 again > 6. Disable ordered deletion snapshot feature > 5. Restart Namenode > Failed to start namenode. > org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 > from path /user/hrt_6/atrr_dir2: the snapshot does not exist. > at > org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237) > at > org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293) > at > org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164) 
> at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737) > {code}
[jira] [Created] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled
Shashikant Banerjee created HDFS-15590: -- Summary: namenode fails to start when ordered snapshot deletion feature is disabled Key: HDFS-15590 URL: https://issues.apache.org/jira/browse/HDFS-15590 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Fix For: 3.4.0 {code:java} 1. Enabled ordered deletion snapshot feature. 2. Created snapshottable directory - /user/hrt_6/atrr_dir1 3. Created snapshots s0, s1, s2. 4. Deleted snapshot s2 5. Delete snapshot s0, s1, s2 again 6. Disable ordered deletion snapshot feature 5. Restart Namenode Failed to start namenode. org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 from path /user/hrt_6/atrr_dir2: the snapshot does not exist. at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237) at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293) at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717) at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737) {code}
[jira] [Resolved] (HDFS-15568) namenode start failed to start when dfs.namenode.snapshot.max.limit set
[ https://issues.apache.org/jira/browse/HDFS-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15568. Fix Version/s: 3.4.0 Resolution: Fixed > namenode start failed to start when dfs.namenode.snapshot.max.limit set > --- > > Key: HDFS-15568 > URL: https://issues.apache.org/jira/browse/HDFS-15568 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 20m > Remaining Estimate: 0h > > {code:java} > 11:35:05.872 AM ERROR NameNode > Failed to start namenode. > org.apache.hadoop.hdfs.protocol.SnapshotException: Failed to add snapshot: > there are already 20 snapshot(s) and the max snapshot limit is 20 > at > org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.addSnapshot(DirectorySnapshottableFeature.java:181) > at > org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addSnapshot(INodeDirectory.java:285) > at > org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.createSnapshot(SnapshotManager.java:447) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:802) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737) > {code} > Steps to reproduce: > -- > directory level snapshot limit set - 100 > Created 100 snapshots > deleted all 100 snapshots (in-order) > No snapshots exist > Then, directory level snapshot limit set - 20 > HDFS restart > Namenode start failed.
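[Editor's note] The repro shows edit-log replay failing against a limit that was lowered after the snapshots were legally created: the replayed OP_CREATE_SNAPSHOT hits the new max of 20 even though all those snapshots were later deleted. One plausible guard — hypothetical, not necessarily the committed fix — is to enforce the limit only for live requests, never while replaying edits that were already admitted once:

```java
// Hypothetical guard: apply the snapshot limit to live RPC calls only,
// not during edit-log replay (HDFS-15568 failure mode).
class SnapshotLimitCheck {
    static void check(int currentCount, int maxLimit, boolean inEditLogReplay) {
        if (inEditLogReplay) {
            return; // replayed ops must load even if the configured limit shrank
        }
        if (currentCount >= maxLimit) {
            throw new IllegalStateException("Failed to add snapshot: there are already "
                + currentCount + " snapshot(s) and the max snapshot limit is " + maxLimit);
        }
    }
}
```

With such a guard, the 100 replayed creates (and their matching deletes) load cleanly, and the lowered limit only constrains new createSnapshot requests after startup.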
[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir
[ https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15563: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Incorrect getTrashRoot return value when a non-snapshottable dir prefix > matches the path of a snapshottable dir > --- > > Key: HDFS-15563 > URL: https://issues.apache.org/jira/browse/HDFS-15563 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Nilotpal Nandi >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is > enabled. > Root cause analysis: > {{SnapshottableDirectoryStatus}} paths retrived inside > {{DFSClient#getSnapshotRoot}} aren't appended with '/', causing some > directories with the same path prefix to be mistakenly classified as > snapshottable directory. > Thanks [~shashikant] for the test case addition. > --- > Repro: > {code:java} > 1. snapshottable directory present in the cluster > hdfs lsSnapshottableDir > drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2 > drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 > /user/hrt_4/newdir/subdir2. Created a new directory outside snapshottable > directory > hdfs dfs -mkdir /user/hrt_4/newdir/subdir23. Tried to delete subdir2 , it > failed > hdfs dfs -rm -r /user/hrt_4/newdir/subdir2 > rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source > /user/hrt_4/newdir/subdir2 and dest > /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are > not under the same snapshot root. 
> {code} > For "*/user/hrt_4/newdir/subdir2*", the trash root is resolved to > "*/user/hrt_4/newdir/subdir/.Trash*", > as is clear from the message here: > {noformat} > rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source > /user/hrt_4/newdir/subdir2 and dest > /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are > not under the same snapshot root.{noformat}
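The root cause above can be illustrated with a small sketch. The helper names below are hypothetical, not the actual DFSClient#getSnapshotRoot internals: a bare prefix check treats /user/hrt_4/newdir/subdir2 as being under the snapshottable directory /user/hrt_4/newdir/subdir, while appending the trailing '/' before matching classifies it correctly.

```java
// Sketch of the prefix-matching bug described above. Method names are
// illustrative, not the real DFSClient code.
public class SnapshotRootPrefix {
    // Buggy check: bare prefix match, so "/a/b2" appears to be under "/a/b".
    static boolean isUnderBuggy(String snapshotRoot, String path) {
        return path.startsWith(snapshotRoot);
    }

    // Fixed check: append '/' to the root before matching, and still
    // accept the root path itself.
    static boolean isUnderFixed(String snapshotRoot, String path) {
        return path.equals(snapshotRoot) || path.startsWith(snapshotRoot + "/");
    }

    public static void main(String[] args) {
        String root = "/user/hrt_4/newdir/subdir";
        String sibling = "/user/hrt_4/newdir/subdir2";
        String child = "/user/hrt_4/newdir/subdir/file";

        // The sibling is wrongly matched by the buggy check...
        System.out.println(isUnderBuggy(root, sibling));  // true (wrong)
        // ...and correctly rejected once the '/' is appended.
        System.out.println(isUnderFixed(root, sibling));  // false
        System.out.println(isUnderFixed(root, child));    // true
    }
}
```

This is why the trash root for subdir2 was resolved under subdir/.Trash: the sibling directory passed the prefix test and was treated as living inside the snapshot root.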
[jira] [Created] (HDFS-15568) namenode failed to start when dfs.namenode.snapshot.max.limit is set
Shashikant Banerjee created HDFS-15568: -- Summary: namenode failed to start when dfs.namenode.snapshot.max.limit is set Key: HDFS-15568 URL: https://issues.apache.org/jira/browse/HDFS-15568 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Nilotpal Nandi Assignee: Shashikant Banerjee {code:java} 11:35:05.872 AM ERROR NameNode Failed to start namenode. org.apache.hadoop.hdfs.protocol.SnapshotException: Failed to add snapshot: there are already 20 snapshot(s) and the max snapshot limit is 20 at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.addSnapshot(DirectorySnapshottableFeature.java:181) at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addSnapshot(INodeDirectory.java:285) at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.createSnapshot(SnapshotManager.java:447) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:802) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670) at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737) {code} Steps to reproduce: -- directory level snapshot limit set - 100 Created 100 snapshots deleted all 100 snapshots (in-order) No snapshots exist Then, directory level snapshot limit set - 20 HDFS restart Namenode failed to start.
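The stack trace shows the failure comes from FSEditLogLoader.applyEditLogOp: the per-directory snapshot limit is enforced while the historical snapshot-create edits are replayed, even though the matching delete edits come later in the log. A simplified, hypothetical model (not the real SnapshotManager API) of replaying 100 create/delete pairs against a lowered limit of 20:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the startup failure: the edit log holds 100 snapshot
// creates followed by 100 deletes, but the limit check runs against the
// new, lower limit (20) while the creates are replayed.
public class SnapshotReplay {
    final int maxLimit;
    final Deque<String> snapshots = new ArrayDeque<>();

    SnapshotReplay(int maxLimit) { this.maxLimit = maxLimit; }

    // enforceLimit=true models the live code path; skipping the check
    // while loading edits (one possible fix direction) lets replay finish.
    void addSnapshot(String name, boolean enforceLimit) {
        if (enforceLimit && snapshots.size() >= maxLimit) {
            throw new IllegalStateException("Failed to add snapshot: there are already "
                + snapshots.size() + " snapshot(s) and the max snapshot limit is " + maxLimit);
        }
        snapshots.addLast(name);
    }

    void deleteSnapshot() { snapshots.removeFirst(); }

    public static void main(String[] args) {
        // Enforcing the limit during replay fails at the 21st historical create.
        SnapshotReplay ns = new SnapshotReplay(20);
        try {
            for (int i = 0; i < 100; i++) ns.addSnapshot("s" + i, true);
        } catch (IllegalStateException e) {
            System.out.println("replay failed: " + e.getMessage());
        }

        // Skipping the check while loading edits lets startup complete:
        SnapshotReplay ns2 = new SnapshotReplay(20);
        for (int i = 0; i < 100; i++) ns2.addSnapshot("s" + i, false);
        for (int i = 0; i < 100; i++) ns2.deleteSnapshot();
        System.out.println("snapshots after replay: " + ns2.snapshots.size()); // 0
    }
}
```

The model makes the asymmetry obvious: the final state (zero snapshots) is well within the new limit, yet replay aborts on an intermediate state that existed only historically.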
[jira] [Assigned] (HDFS-15563) getTrashRoot() location can be wrong when the non-snapshottable directory contains the name of the snapshottable directory in its name
[ https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15563: -- Assignee: Siyao Meng > getTrashRoot() location can be wrong when the non-snapshottable directory > contains the name of the snapshottable directory in its name > --- > > Key: HDFS-15563 > URL: https://issues.apache.org/jira/browse/HDFS-15563 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Nilotpal Nandi >Assignee: Siyao Meng >Priority: Major > Fix For: 3.4.0 > > > {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2, it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code} > For "*/user/hrt_4/newdir/subdir2*", the trash root is resolved to > "*/user/hrt_4/newdir/subdir/.Trash*", > as is clear from the message here: > {noformat} > rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source > /user/hrt_4/newdir/subdir2 and dest > /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are > not under the same snapshot root.{noformat}
[jira] [Created] (HDFS-15563) getTrashRoot() location can be wrong when the non-snapshottable directory contains the name of the snapshottable directory in its name
Shashikant Banerjee created HDFS-15563: -- Summary: getTrashRoot() location can be wrong when the non-snapshottable directory contains the name of the snapshottable directory in its name Key: HDFS-15563 URL: https://issues.apache.org/jira/browse/HDFS-15563 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Affects Versions: 3.4.0 Reporter: Nilotpal Nandi Fix For: 3.4.0 {code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2, it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code} For "*/user/hrt_4/newdir/subdir2*", the trash root is resolved to "*/user/hrt_4/newdir/subdir/.Trash*", as is clear from the message here: {noformat} rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}
[jira] [Resolved] (HDFS-15542) Add identified snapshot corruption tests for ordered snapshot deletion
[ https://issues.apache.org/jira/browse/HDFS-15542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15542. Fix Version/s: 3.4.0 Resolution: Fixed > Add identified snapshot corruption tests for ordered snapshot deletion > -- > > Key: HDFS-15542 > URL: https://issues.apache.org/jira/browse/HDFS-15542 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 50m > Remaining Estimate: 0h > > HDFS-13101, HDFS-15012 and HDFS-15313 along with HDFS-15470 have fsimage > corruption sequences with snapshots. The idea here is to aggregate these > unit tests and enable them for the ordered snapshot deletion feature.
[jira] [Resolved] (HDFS-15500) In-order deletion of snapshots: Diff lists must be updated only in the last snapshot
[ https://issues.apache.org/jira/browse/HDFS-15500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15500. Fix Version/s: 3.4.0 Resolution: Fixed Thanks [~szetszwo] for the contribution. > In-order deletion of snapshots: Diff lists must be updated only in the last > snapshot > --- > > Key: HDFS-15500 > URL: https://issues.apache.org/jira/browse/HDFS-15500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Mukul Kumar Singh >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 3.4.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > With ordered deletions the diff lists of the snapshots should become > immutable except for the latest one.
[jira] [Resolved] (HDFS-15541) Disallow making a Snapshottable directory unsnapshottable if it has a non-empty snapshot trash directory
[ https://issues.apache.org/jira/browse/HDFS-15541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15541. Fix Version/s: 3.4.0 Resolution: Duplicate > Disallow making a Snapshottable directory unsnapshottable if it has a non-empty > snapshot trash directory > - > > Key: HDFS-15541 > URL: https://issues.apache.org/jira/browse/HDFS-15541 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Siyao Meng >Priority: Major > Fix For: 3.4.0 > > > If the snapshot trash is enabled, a snapshottable directory should be > disallowed to be marked unsnapshottable if it has a non-empty snapshot trash > directory.
[jira] [Created] (HDFS-15542) Add identified snapshot corruption tests for ordered snapshot deletion
Shashikant Banerjee created HDFS-15542: -- Summary: Add identified snapshot corruption tests for ordered snapshot deletion Key: HDFS-15542 URL: https://issues.apache.org/jira/browse/HDFS-15542 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee HDFS-13101, HDFS-15012 and HDFS-15313 along with HDFS-15470 have fsimage corruption sequences with snapshots. The idea here is to aggregate these unit tests and enable them for the ordered snapshot deletion feature.
[jira] [Created] (HDFS-15541) Disallow making a Snapshottable directory unsnapshottable if it has a non-empty snapshot trash directory
Shashikant Banerjee created HDFS-15541: -- Summary: Disallow making a Snapshottable directory unsnapshottable if it has a non-empty snapshot trash directory Key: HDFS-15541 URL: https://issues.apache.org/jira/browse/HDFS-15541 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Assignee: Siyao Meng If the snapshot trash is enabled, a snapshottable directory should be disallowed to be marked unsnapshottable if it has a non-empty snapshot trash directory.
[jira] [Assigned] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15516: -- Assignee: jianghua zhu (was: Shashikant Banerjee) > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Shashikant Banerjee >Assignee: jianghua zhu >Priority: Major > > Currently, if a file create happens with flags like overwrite, the audit logs > don't seem to contain the info regarding those flags. It would be useful to > add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15516: --- Reporter: Shashikant Banerjee (was: jianghua zhu) > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags like overwrite, the audit logs > don't seem to contain the info regarding those flags. It would be useful to > add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Resolved] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15490. Resolution: Won't Do > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code}
[jira] [Updated] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15490: --- Status: Open (was: Patch Available) > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code}
[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15516: --- Reporter: Jin Adachi (was: Shashikant Banerjee) > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Jin Adachi >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags like overwrite, the audit logs > don't seem to contain the info regarding those flags. It would be useful to > add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15516: --- Reporter: jianghua zhu (was: Jin Adachi) > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: jianghua zhu >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags like overwrite, the audit logs > don't seem to contain the info regarding those flags. It would be useful to > add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184930#comment-17184930 ] Shashikant Banerjee commented on HDFS-15516: [~jianghuazhu], please go ahead. > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags like overwrite, the audit logs > don't seem to contain the info regarding those flags. It would be useful to > add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Updated] (HDFS-15524) Add edit log entry for Snapshot deletion GC thread snapshot deletion
[ https://issues.apache.org/jira/browse/HDFS-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15524: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add edit log entry for Snapshot deletion GC thread snapshot deletion > > > Key: HDFS-15524 > URL: https://issues.apache.org/jira/browse/HDFS-15524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > Currently, the snapshot deletion GC thread doesn't create an edit log transaction > when the actual snapshot is garbage collected. If the GC thread deletes snapshots > and the namenode is then restarted, snapshots which were garbage collected before > the restart will reappear until the GC thread picks them up again, because no > edits were captured for the actual garbage collection. At the same time, the data > might already have been deleted from the datanodes, which may lead to many > spurious missing block alerts.
[jira] [Updated] (HDFS-15496) Add UI for deleted snapshots
[ https://issues.apache.org/jira/browse/HDFS-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15496: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add UI for deleted snapshots > > > Key: HDFS-15496 > URL: https://issues.apache.org/jira/browse/HDFS-15496 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mukul Kumar Singh >Assignee: Vivek Ratnavel Subramanian >Priority: Major > Fix For: 3.4.0 > > > Add UI for deleted snapshots > a) Show the list of snapshots per snapshottable directory > b) Add deleted status in the JMX output for the Snapshot along with a snap ID > e) NN UI should sort the snapshots by snapIds.
[jira] [Updated] (HDFS-15524) Add edit log entry for Snapshot deletion GC thread snapshot deletion
[ https://issues.apache.org/jira/browse/HDFS-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15524: --- Status: Patch Available (was: In Progress) > Add edit log entry for Snapshot deletion GC thread snapshot deletion > > > Key: HDFS-15524 > URL: https://issues.apache.org/jira/browse/HDFS-15524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, the snapshot deletion GC thread doesn't create an edit log transaction > when the actual snapshot is garbage collected. If the GC thread deletes snapshots > and the namenode is then restarted, snapshots which were garbage collected before > the restart will reappear until the GC thread picks them up again, because no > edits were captured for the actual garbage collection. At the same time, the data > might already have been deleted from the datanodes, which may lead to many > spurious missing block alerts.
[jira] [Work started] (HDFS-15524) Add edit log entry for Snapshot deletion GC thread snapshot deletion
[ https://issues.apache.org/jira/browse/HDFS-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-15524 started by Shashikant Banerjee. -- > Add edit log entry for Snapshot deletion GC thread snapshot deletion > > > Key: HDFS-15524 > URL: https://issues.apache.org/jira/browse/HDFS-15524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, the snapshot deletion GC thread doesn't create an edit log transaction > when the actual snapshot is garbage collected. If the GC thread deletes snapshots > and the namenode is then restarted, snapshots which were garbage collected before > the restart will reappear until the GC thread picks them up again, because no > edits were captured for the actual garbage collection. At the same time, the data > might already have been deleted from the datanodes, which may lead to many > spurious missing block alerts.
[jira] [Created] (HDFS-15524) Add edit log entry for Snapshot deletion GC thread snapshot deletion
Shashikant Banerjee created HDFS-15524: -- Summary: Add edit log entry for Snapshot deletion GC thread snapshot deletion Key: HDFS-15524 URL: https://issues.apache.org/jira/browse/HDFS-15524 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee Currently, the snapshot deletion GC thread doesn't create an edit log transaction when the actual snapshot is garbage collected. If the GC thread deletes snapshots and the namenode is then restarted, snapshots which were garbage collected before the restart will reappear until the GC thread picks them up again, because no edits were captured for the actual garbage collection. At the same time, the data might already have been deleted from the datanodes, which may lead to many spurious missing block alerts.
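The reappearing-snapshot behaviour follows from namenode state being fsimage plus replayed edits. A toy model (illustrative only, not the real FSEditLog API) shows why a GC deletion that is not logged gets undone by a restart:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy model: restart() reconstructs namenode state from the fsimage and
// whatever snapshot-deletion edits were logged. A GC deletion that logs
// no edit is invisible to this reconstruction.
public class SnapshotGcModel {
    static Set<String> restart(Set<String> fsimage, List<String> deleteEdits) {
        Set<String> state = new LinkedHashSet<>(fsimage);
        state.removeAll(deleteEdits); // replay only the logged deletions
        return state;
    }

    public static void main(String[] args) {
        Set<String> fsimage = new LinkedHashSet<>(List.of("s1", "s2"));
        List<String> edits = new ArrayList<>();

        // GC deletes s1 in memory but logs nothing -> s1 reappears after restart.
        System.out.println(restart(fsimage, edits)); // [s1, s2]

        // With the fix described in the Jira, GC also records a deletion
        // edit, so the removal survives the restart.
        edits.add("s1");
        System.out.println(restart(fsimage, edits)); // [s2]
    }
}
```

Since the datanodes may have already dropped the blocks for s1, the resurrected snapshot references data that no longer exists, which is exactly the spurious missing-block scenario described above.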
[jira] [Assigned] (HDFS-15518) Wrong operation name in FsNamesystem for listSnapshots
[ https://issues.apache.org/jira/browse/HDFS-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15518: -- Assignee: Aryan Gupta > Wrong operation name in FsNamesystem for listSnapshots > -- > > Key: HDFS-15518 > URL: https://issues.apache.org/jira/browse/HDFS-15518 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mukul Kumar Singh >Assignee: Aryan Gupta >Priority: Major > > List snapshots makes use of listSnapshotDirectory as the operation name in place > of listSnapshots. > https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L7026
[jira] [Resolved] (HDFS-15520) Use visitor pattern to visit namespace tree
[ https://issues.apache.org/jira/browse/HDFS-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15520. Fix Version/s: 3.4.0 Resolution: Fixed > Use visitor pattern to visit namespace tree > --- > > Key: HDFS-15520 > URL: https://issues.apache.org/jira/browse/HDFS-15520 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 3.4.0 > > > In order to allow the FsImageValidation tool to verify the namespace > structure, we use a visitor pattern so that the tool can visit all the INodes > and all the snapshots in the namespace tree. > The existing INode.dumpTreeRecursively() can also be implemented by a visitor.
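The visitor idea in HDFS-15520 can be sketched in miniature. The class and method names below are illustrative; the real INode hierarchy and FsImageValidation tool are far richer, but the shape is the same: the tree exposes accept(), and tools like dumpTreeRecursively or a validator become visitors.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal visitor-pattern sketch over a namespace-like tree. Names are
// hypothetical, not the actual HDFS INode API.
public class NamespaceVisitorDemo {
    interface NamespaceVisitor {
        void visitFile(String name);
        void visitDirectory(String name);
    }

    static abstract class Node {
        final String name;
        Node(String name) { this.name = name; }
        abstract void accept(NamespaceVisitor v);
    }

    static class File extends Node {
        File(String name) { super(name); }
        void accept(NamespaceVisitor v) { v.visitFile(name); }
    }

    static class Directory extends Node {
        final List<Node> children = new ArrayList<>();
        Directory(String name) { super(name); }
        void accept(NamespaceVisitor v) {
            v.visitDirectory(name);
            // Recurse, the way dumpTreeRecursively walks the tree.
            for (Node child : children) child.accept(v);
        }
    }

    // Build a tiny tree and record the visit order.
    static List<String> walk() {
        Directory root = new Directory("/");
        Directory user = new Directory("user");
        user.children.add(new File("data.txt"));
        root.children.add(user);

        List<String> visited = new ArrayList<>();
        root.accept(new NamespaceVisitor() {
            public void visitFile(String n) { visited.add("file:" + n); }
            public void visitDirectory(String n) { visited.add("dir:" + n); }
        });
        return visited;
    }

    public static void main(String[] args) {
        System.out.println(walk()); // [dir:/, dir:user, file:data.txt]
    }
}
```

The payoff is the one the Jira describes: a new tool (a validator, a dumper, a statistics collector) is just a new visitor, and the traversal logic lives in one place.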
[jira] [Commented] (HDFS-15518) Wrong operation name in FsNamesystem for listSnapshots
[ https://issues.apache.org/jira/browse/HDFS-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174405#comment-17174405 ] Shashikant Banerjee commented on HDFS-15518: [~hemanthboyina], yes, it should be "listSnapshots" > Wrong operation name in FsNamesystem for listSnapshots > -- > > Key: HDFS-15518 > URL: https://issues.apache.org/jira/browse/HDFS-15518 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mukul Kumar Singh >Priority: Major > > List snapshots makes use of listSnapshotDirectory as the operation name in place > of listSnapshots. > https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L7026
[jira] [Assigned] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15516: -- Assignee: Shashikant Banerjee > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags like overwrite, the audit logs > don't seem to contain the info regarding those flags. It would be useful to > add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Created] (HDFS-15516) Add info for create flags in NameNode audit logs
Shashikant Banerjee created HDFS-15516: -- Summary: Add info for create flags in NameNode audit logs Key: HDFS-15516 URL: https://issues.apache.org/jira/browse/HDFS-15516 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Shashikant Banerjee Currently, if a file create happens with flags like overwrite, the audit logs don't seem to contain the info regarding those flags. It would be useful to add info regarding the create options in the audit logs, similar to rename ops.
[jira] [Updated] (HDFS-15497) Make snapshot limit on global as well per snapshot root directory configurable
[ https://issues.apache.org/jira/browse/HDFS-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15497: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Make snapshot limit on global as well per snapshot root directory configurable > -- > > Key: HDFS-15497 > URL: https://issues.apache.org/jira/browse/HDFS-15497 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15497.000.patch > > > Currently, there is no configurable limit imposed on the number of snapshots > remaining in the system, either at the filesystem level or per snapshottable > root directory. Too many snapshots in the system can potentially bloat up the > namespace, and with the ordered deletion feature on, too many snapshots per > snapshottable root directory will make the deletion of the oldest snapshot > more expensive. This Jira aims to impose these configurable limits.
[jira] [Updated] (HDFS-15498) Show snapshots deletion status in snapList cmd
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Status: Patch Available (was: Open) > Show snapshots deletion status in snapList cmd > -- > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15498.000.patch > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add deletion status to cmd output. > > SAMPLE OUTPUT: > {noformat} > sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshottableDir > drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 2 65536 /user > sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshot /user > drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 1 ACTIVE > /user/.snapshot/s1 > drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:51 0 DELETED > /user/.snapshot/s20200727-115156.407{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15481) Ordered snapshot deletion: garbage collect deleted snapshots
[ https://issues.apache.org/jira/browse/HDFS-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15481: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Ordered snapshot deletion: garbage collect deleted snapshots > > > Key: HDFS-15481 > URL: https://issues.apache.org/jira/browse/HDFS-15481 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 3.4.0 > > Attachments: h15481_20200723.patch, h15481_20200723b.patch > > > When the earliest snapshot is actually deleted, if the subsequent snapshots > are already marked as deleted, the subsequent snapshots can be also actually > removed from the file system. In this JIRA, we implement a mechanism to > garbage collect these snapshots. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
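The garbage-collection rule above can be sketched as a small routine — a hypothetical model, not the actual NameNode code: snapshots are kept in creation order, and only the leading run of marked-as-deleted snapshots may be physically removed, which preserves ordered deletion.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of HDFS-15481's GC: 'names' holds snapshots in creation
// order, 'deleted' holds the corresponding marked-as-deleted flags (XAttr).
// Physically remove snapshots from the front while they are marked deleted.
class SnapshotGc {
  static List<String> collect(List<String> names, List<Boolean> deleted) {
    List<String> removed = new ArrayList<>();
    while (!names.isEmpty() && deleted.get(0)) {
      removed.add(names.remove(0)); // earliest snapshot is actually deleted
      deleted.remove(0);
    }
    return removed; // later DELETED snapshots stay until they become earliest
  }
}
```

Note the loop stops at the first snapshot not marked deleted: a marked snapshot behind an active one is left in place, exactly the situation HDFS-15481 resolves once the earlier snapshots go away.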
[jira] [Updated] (HDFS-15498) Show snapshots deletion status in snapList cmd
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Description: HDFS-15488 adds a cmd to list all snapshots for a given snapshottable directory. A snapshot can be just marked as deleted with ordered deletion config set. This Jira aims to add deletion status to cmd output. SAMPLE OUTPUT: {noformat} sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshottableDir drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 2 65536 /user sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshot /user drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 1 ACTIVE /user/.snapshot/s1 drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:51 0 DELETED /user/.snapshot/s20200727-115156.407{noformat} was:HDFS-15488 adds a cmd to list all snapshots for a given snapshottable directory. A snapshot can be just marked as deleted with ordered deletion config set. This Jira aims to add deletion status to cmd output. > Show snapshots deletion status in snapList cmd > -- > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15498.000.patch > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add deletion status to cmd output. 
> > SAMPLE OUTPUT: > {noformat} > sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshottableDir > drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 2 65536 /user > sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshot /user > drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 1 ACTIVE > /user/.snapshot/s1 > drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:51 0 DELETED > /user/.snapshot/s20200727-115156.407{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15498) Show snapshots deletion status in snapList cmd
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Attachment: HDFS-15498.000.patch > Show snapshots deletion status in snapList cmd > -- > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15498.000.patch > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add deletion status to cmd output. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15498) Show snapshots deletion status in snapList cmd
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Description: HDFS-15488 adds a cmd to list all snapshots for a given snapshottable directory. A snapshot can be just marked as deleted with ordered deletion config set. This Jira aims to add deletion status to cmd output. (was: HDFS-15488 adds a cmd to list all snapshots for a given snapshottable directory. A snapshot can be just marked as deleted with ordered deletion config set. This Jira aims to add an option to show the deletion status.) > Show snapshots deletion status in snapList cmd > -- > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add deletion status to cmd output. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15498) Show snapshots deletion status in snapList cmd
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Summary: Show snapshots deletion status in snapList cmd (was: Add an option in snapList cmd to show snapshots deletion status) > Show snapshots deletion status in snapList cmd > -- > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add an option to show the deletion status. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15498) Add an option in snapList cmd to show snapshots deletion status
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Summary: Add an option in snapList cmd to show snapshots deletion status (was: Add an option in snapList cmd to show snapshots which are marked deleted) > Add an option in snapList cmd to show snapshots deletion status > --- > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add an option to show the deletion status. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15498) Add an option in snapList cmd to show snapshots which are marked deleted
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498: --- Description: HDFS-15488 adds a cmd to list all snapshots for a given snapshottable directory. A snapshot can be just marked as deleted with ordered deletion config set. This Jira aims to add an option to show the deletion status. > Add an option in snapList cmd to show snapshots which are marked deleted > > > Key: HDFS-15498 > URL: https://issues.apache.org/jira/browse/HDFS-15498 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > > HDFS-15488 adds a cmd to list all snapshots for a given snapshottable > directory. A snapshot can be just marked as deleted with ordered deletion > config set. This Jira aims to add an option to show the deletion status. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15498) Add an option in snapList cmd to show snapshots which are marked deleted
Shashikant Banerjee created HDFS-15498: -- Summary: Add an option in snapList cmd to show snapshots which are marked deleted Key: HDFS-15498 URL: https://issues.apache.org/jira/browse/HDFS-15498 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee Fix For: 3.4.0 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15488) Add a command to list all snapshots for a snaphottable root with snapshot Ids
[ https://issues.apache.org/jira/browse/HDFS-15488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15488: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add a command to list all snapshots for a snaphottable root with snapshot Ids > - > > Key: HDFS-15488 > URL: https://issues.apache.org/jira/browse/HDFS-15488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15488.000.patch > > > Currently, the way to list snapshots is do a ls on > /.snapshot directory. Since creation time is not > recorded , there is no way to actually figure out the chronological order of > snapshots. The idea here is to add a command to list snapshots for a > snapshottable directory along with snapshot Ids which grow monotonically as > snapshots are created in the system. With snapID, it will be helpful to > figure out the chronology of snapshots in the system. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15497) Make snapshot limit on global as well per snapshot root directory configurable
[ https://issues.apache.org/jira/browse/HDFS-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15497: --- Attachment: HDFS-15497.000.patch > Make snapshot limit on global as well per snapshot root directory configurable > -- > > Key: HDFS-15497 > URL: https://issues.apache.org/jira/browse/HDFS-15497 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15497.000.patch > > > Currently, there is no configurable limit imposed on the no of snapshots > remaining in the system neither on the filesystem level nor on a snaphottable > root directory. Too many snapshots in the system can potentially bloat up the > namespace and with ordered deletion feature on , too many snapshots per > snapshottable root directory will make the deletion of the oldest snapshot > more expensive. This Jira aims to impose these configurable limits . -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15497) Make snapshot limit on global as well per snapshot root directory configurable
Shashikant Banerjee created HDFS-15497: -- Summary: Make snapshot limit on global as well per snapshot root directory configurable Key: HDFS-15497 URL: https://issues.apache.org/jira/browse/HDFS-15497 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Affects Versions: 3.4.0 Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee Currently, there is no configurable limit on the number of snapshots retained in the system, either at the filesystem level or per snapshottable root directory. Too many snapshots in the system can potentially bloat up the namespace, and with the ordered deletion feature on, too many snapshots per snapshottable root directory make deletion of the oldest snapshot more expensive. This Jira aims to impose these configurable limits. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15488) Add a command to list all snapshots for a snaphottable root with snapshot Ids
[ https://issues.apache.org/jira/browse/HDFS-15488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15488: --- Attachment: HDFS-15488.000.patch > Add a command to list all snapshots for a snaphottable root with snapshot Ids > - > > Key: HDFS-15488 > URL: https://issues.apache.org/jira/browse/HDFS-15488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15488.000.patch > > > Currently, the way to list snapshots is do a ls on > /.snapshot directory. Since creation time is not > recorded , there is no way to actually figure out the chronological order of > snapshots. The idea here is to add a command to list snapshots for a > snapshottable directory along with snapshot Ids which grow monotonically as > snapshots are created in the system. With snapID, it will be helpful to > figure out the chronology of snapshots in the system. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15490: --- Attachment: HDFS-15490.000.patch > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
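A sketch of how the reported violations could be addressed — a private constructor to satisfy HideUtilityClassConstructor, and UPPER_SNAKE_CASE names to satisfy ConstantName. The surrounding class bodies are elided, so treat this as illustrative rather than the committed patch:

```java
// FSDirXAttrOp: utility classes should not have a public or default constructor.
final class FSDirXAttrOp {
  private FSDirXAttrOp() {
    // Hide the implicit constructor; all members of this class are static.
  }
}

// TestOrderedSnapshotDeletion: constants must match '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'.
class TestOrderedSnapshotDeletion {
  static final String XATTR_NAME = "user.a1";           // was: xattrName
  static final byte[] XATTR_VALUE = {0x31, 0x32, 0x33}; // was: xattrValue
}
```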
[jira] [Assigned] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15490: -- Assignee: Shashikant Banerjee > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15490: --- Status: Patch Available (was: Open) > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15490: --- Description: {code:java} ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public class FSDirXAttrOp {:1: Utility classes should not have a public or default constructor. [HideUtilityClassConstructor] ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: static final String xattrName = "user.a1";:23: Name 'xattrName' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] {code} > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Priority: Major > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. 
[ConstantName] > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
Shashikant Banerjee created HDFS-15490: -- Summary: Address checkstyle issues reported with HDFS-15480 Key: HDFS-15490 URL: https://issues.apache.org/jira/browse/HDFS-15490 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Shashikant Banerjee -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15480: --- Attachment: HDFS-15480.002.patch > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch, > HDFS-15480.002.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hided from users; see HDFS-15482. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15488) Add a command to list all snapshots for a snaphottable root with snapshot Ids
[ https://issues.apache.org/jira/browse/HDFS-15488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15488: --- Summary: Add a command to list all snapshots for a snaphottable root with snapshot Ids (was: Add. a command to list all snapshots for a snaphottable root with snap Ids) > Add a command to list all snapshots for a snaphottable root with snapshot Ids > - > > Key: HDFS-15488 > URL: https://issues.apache.org/jira/browse/HDFS-15488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, the way to list snapshots is do a ls on > /.snapshot directory. Since creation time is not > recorded , there is no way to actually figure out the chronological order of > snapshots. The idea here is to add a command to list snapshots for a > snapshottable directory along with snapshot Ids which grow monotonically as > snapshots are created in the system. With snapID, it will be helpful to > figure out the chronology of snapshots in the system. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15488) Add. a command to list all snapshots for a snaphottable root with snap Ids
Shashikant Banerjee created HDFS-15488: -- Summary: Add. a command to list all snapshots for a snaphottable root with snap Ids Key: HDFS-15488 URL: https://issues.apache.org/jira/browse/HDFS-15488 Project: Hadoop HDFS Issue Type: Sub-task Components: snapshots Reporter: Shashikant Banerjee Assignee: Shashikant Banerjee Currently, the way to list snapshots is to do an ls on the .snapshot directory of a snapshottable root. Since creation time is not recorded, there is no way to figure out the chronological order of snapshots. The idea here is to add a command to list the snapshots of a snapshottable directory along with snapshot IDs, which grow monotonically as snapshots are created in the system. With the snapshot ID, it becomes possible to figure out the chronology of snapshots in the system. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-15483) Ordered snapshot deletion: Disallow rename between two snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15483: -- Assignee: Shashikant Banerjee > Ordered snapshot deletion: Disallow rename between two snapshottable > directories > > > Key: HDFS-15483 > URL: https://issues.apache.org/jira/browse/HDFS-15483 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > > With the ordered snapshot deletion feature, only the *earliest* snapshot can > be actually deleted from the file system. If renaming between snapshottable > directories is allowed, only the earliest snapshot among all the > snapshottable directories can be actually deleted. In such case, individual > snapshottable directory may not be able to free up the resources by itself. > Therefore, we propose disallowing renaming between snapshottable directories > in this JIRA. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
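The proposed restriction can be sketched as a small predicate — a hypothetical helper, not the actual NameNode rename path: given the nearest snapshottable ancestor of the source and of the destination (null when there is none), the rename is rejected only when the two fall under different snapshottable roots.

```java
// Hypothetical check for HDFS-15483: disallow renames that cross from one
// snapshottable root into another, since that would entangle their
// ordered-deletion histories.
class RenameCheck {
  static boolean allowed(String srcSnapRoot, String dstSnapRoot) {
    if (srcSnapRoot == null || dstSnapRoot == null) {
      return true; // at most one side is under a snapshottable root
    }
    return srcSnapRoot.equals(dstSnapRoot); // same root: rename stays local
  }
}
```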
[jira] [Assigned] (HDFS-15482) Ordered snapshot deletion: hide the deleted snapshots from users
[ https://issues.apache.org/jira/browse/HDFS-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-15482: -- Assignee: Shashikant Banerjee > Ordered snapshot deletion: hide the deleted snapshots from users > > > Key: HDFS-15482 > URL: https://issues.apache.org/jira/browse/HDFS-15482 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > > In HDFS-15480, the behavior of deleting the non-earliest snapshots is > changed to marking them as deleted in XAttr but not actually deleting them. > The users are still able to access these snapshots as usual. > In this JIRA, the marked-for-deletion snapshots are hidden so that they become > inaccessible > to users. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161951#comment-17161951 ] Shashikant Banerjee commented on HDFS-15480: Thanks [~umamaheswararao]/[~msingh] for the review. The review comments are addressed here: [https://github.com/apache/hadoop/pull/2163] > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hided from users; see HDFS-15482. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161910#comment-17161910 ] Shashikant Banerjee commented on HDFS-15480: Thanks [~szetszwo] for the review comments. Patch v1 addresses the comments along with [~msingh] comments here: [https://github.com/apache/hadoop/pull/2156] > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hided from users; see HDFS-15482. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-15480:
---------------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-15480:
---------------------------------------
    Attachment: HDFS-15480.001.patch
[jira] [Resolved] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature
[ https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee resolved HDFS-15479.
----------------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

> Ordered snapshot deletion: make it a configurable feature
> ---------------------------------------------------------
>
>                 Key: HDFS-15479
>                 URL: https://issues.apache.org/jira/browse/HDFS-15479
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>            Reporter: Tsz-wo Sze
>            Assignee: Tsz-wo Sze
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: h15479_20200719.patch
>
> Ordered snapshot deletion is a configurable feature. In this JIRA, a conf is added.
> When the feature is enabled, only the earliest snapshot can be deleted. For
> deleting the non-earliest snapshots, the behavior is temporarily changed to
> throwing an exception in this JIRA. In HDFS-15480, the behavior of deleting
> the non-earliest snapshots will be changed to marking them as deleted.
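The interim behavior described in HDFS-15479 above amounts to a conf-gated precondition on snapshot deletion. The following is a minimal sketch under assumed names; the conf key, class, and method names are illustrative, not the exact identifiers from the patch:

```java
import java.util.List;

// Sketch of the conf-gated check: when ordered deletion is enabled,
// deleting anything but the earliest snapshot throws (hypothetical names).
class OrderedDeletionCheck {
    // Would come from a configuration flag such as the one added in HDFS-15479.
    private final boolean orderedDeletionEnabled;

    OrderedDeletionCheck(boolean orderedDeletionEnabled) {
        this.orderedDeletionEnabled = orderedDeletionEnabled;
    }

    // snapshotsInCreationOrder must be ordered by creation time, earliest first.
    void checkDeletable(List<String> snapshotsInCreationOrder, String toDelete) {
        if (orderedDeletionEnabled
                && !snapshotsInCreationOrder.isEmpty()
                && !snapshotsInCreationOrder.get(0).equals(toDelete)) {
            throw new UnsupportedOperationException(
                "Ordered snapshot deletion: only the earliest snapshot "
                + snapshotsInCreationOrder.get(0) + " may be deleted");
        }
    }
}
```

Per the issue description, this exception-throwing behavior is temporary; HDFS-15480 replaces it with marking the snapshot as deleted.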
[jira] [Updated] (HDFS-15470) Added more unit tests to validate rename behaviour across snapshots
[ https://issues.apache.org/jira/browse/HDFS-15470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-15470:
---------------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Thanks [~jnp] for the review. I have committed this.

> Added more unit tests to validate rename behaviour across snapshots
> -------------------------------------------------------------------
>
>                 Key: HDFS-15470
>                 URL: https://issues.apache.org/jira/browse/HDFS-15470
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: snapshots
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 3.0.4
>
>         Attachments: HDFS-15470.000.patch, HDFS-15470.001.patch, HDFS-15470.002.patch
>
> HDFS-15313 fixes a critical issue where a sequence of snapshot deletes could
> delete data in the active fs. The idea here is to add more tests to verify
> that behaviour.
[jira] [Commented] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature
[ https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161319#comment-17161319 ]

Shashikant Banerjee commented on HDFS-15479:
--------------------------------------------

Thanks [~szetszwo] for working on this. The changes look ok.
{code:java}
final Snapshot earliest = snapshottable.getSnapshotList().get(0);
{code}
I think the snapshot list is sorted by name, so the 1st element in the list may not be the earliest snapshot? Can you please check?
[jira] [Comment Edited] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature
[ https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161319#comment-17161319 ]

Shashikant Banerjee edited comment on HDFS-15479 at 7/20/20, 3:19 PM:
----------------------------------------------------------------------

Thanks [~szetszwo] for working on this.
{code:java}
final Snapshot earliest = snapshottable.getSnapshotList().get(0);
{code}
I think the snapshot list is sorted by name, so the 1st element in the list may not be the earliest snapshot? Can you please check?

was (Author: shashikant):
Thanks [~szetszwo] for working on this. The changes look ok.
{code:java}
final Snapshot earliest = snapshottable.getSnapshotList().get(0);
{code}
I think the snapshot list is sorted by name, so the 1st element in the list may not be the earliest snapshot? Can you please check?
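The review concern above can be demonstrated with a small standalone example. The `Snapshot` type here is a stand-in for illustration, not the real HDFS class: if the snapshot list is sorted by name, index 0 need not be the earliest snapshot, whereas picking the minimum snapshot ID works regardless of name order (assuming, as in HDFS, that IDs increase monotonically with creation).

```java
import java.util.Comparator;
import java.util.List;

// Stand-in for a snapshot: id increases with creation order (assumption
// stated above), name is user-chosen and can sort in any order.
record Snapshot(int id, String name) {}

class EarliestSnapshot {
    // Mirrors the reviewed code: trusts the list order, which is wrong
    // if the list is sorted by name rather than by creation time.
    static Snapshot byListOrder(List<Snapshot> sortedByName) {
        return sortedByName.get(0);
    }

    // Robust alternative: the smallest id is the earliest snapshot,
    // independent of how the list happens to be sorted.
    static Snapshot byId(List<Snapshot> snapshots) {
        return snapshots.stream()
            .min(Comparator.comparingInt(Snapshot::id))
            .orElseThrow();
    }
}
```

For example, if "weekly" (id 0) is created before "daily" (id 1), a name-sorted list puts "daily" first, so `byListOrder` returns the wrong snapshot while `byId` returns "weekly".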
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161315#comment-17161315 ]

Shashikant Banerjee commented on HDFS-15480:
--------------------------------------------

HDFS-15480.000.patch --> 1st patch. Will add more tests.