smengcl opened a new pull request #2448: URL: https://github.com/apache/hadoop/pull/2448
This is an addendum to [HDFS-15607](https://issues.apache.org/jira/browse/HDFS-15607).

## Problem

Before HDFS-15607, when an admin disallowed snapshots on a **file**, the operation threw `PathIsNotDirectoryException`:

```
org.apache.hadoop.fs.PathIsNotDirectoryException: `/ssdir1/file1': Is not a directory
	at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:65)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.resetSnapshottable(SnapshotManager.java:289)
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.disallowSnapshot(FSDirSnapshotOp.java:76)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.disallowSnapshot(FSNamesystem.java:6933)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.disallowSnapshot(NameNodeRpcServer.java:1969)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.disallowSnapshot(ClientNamenodeProtocolServerSideTranslatorPB.java:1321)
```

After HDFS-15607 (the current behavior), the thrown exception changed to `AccessControlException`.
This happens because `DFS#checkTrashRootAndRemoveIfEmpty` calls `DFS#listStatus` on a trash path (`/ssdir1/file1/.Trash`) that resolves through a file (`/ssdir1/file1`):

```
2020-11-09 09:50:18,374 [IPC Server handler 3 on default port 52295] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(8708)) - allowed=false ugi=smeng (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/ssdir1/file1/.Trash dst=null perm=null proto=rpc
2020-11-09 09:50:18,374 [IPC Server handler 3 on default port 52295] INFO ipc.Server (Server.java:logException(3006)) - IPC Server handler 3 on default port 52295, call Call#31 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 127.0.0.1:52305: org.apache.hadoop.security.AccessControlException: /ssdir1/file1 (is not a directory)
disallowSnapshot: /ssdir1/file1 (is not a directory)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:739)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:57)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4132)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:1175)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:758)
```

## Solution

Ignore the `AccessControlException` thrown inside `DFS#checkTrashRootAndRemoveIfEmpty` and let the original `dfs.disallowSnapshot` logic handle the invalid path.
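The pattern described above can be sketched in plain Java. This is a minimal, self-contained illustration (the class and method bodies are stand-ins, not the actual Hadoop code): the trash-root pre-check swallows the `AccessControlException` from the probing `listStatus`, so the subsequent `disallowSnapshot` validation can surface the proper `PathIsNotDirectoryException` instead.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class TrashCheckSketch {

  // Stand-in for org.apache.hadoop.security.AccessControlException.
  static class AccessControlException extends IOException {
    AccessControlException(String msg) { super(msg); }
  }

  // Stand-in for DFS#listStatus probing "/ssdir1/file1/.Trash", which the
  // NameNode rejects because "/ssdir1/file1" is a file, not a directory.
  static void listStatus(String path) throws IOException {
    throw new AccessControlException(path + " (is not a directory)");
  }

  // Sketch of the fixed DFS#checkTrashRootAndRemoveIfEmpty: the probe's
  // AccessControlException (and a missing trash root) are ignored, so the
  // caller's own disallowSnapshot validation runs next and throws
  // PathIsNotDirectoryException for files, as before HDFS-15607.
  static boolean checkTrashRootAndRemoveIfEmpty(String trashRoot) {
    try {
      listStatus(trashRoot);
      return true;   // trash root exists and is listable
    } catch (FileNotFoundException | AccessControlException e) {
      return false;  // ignore: let the original logic report the real error
    } catch (IOException e) {
      throw new RuntimeException(e); // unexpected failures still surface
    }
  }

  public static void main(String[] args) {
    System.out.println(checkTrashRootAndRemoveIfEmpty("/ssdir1/file1/.Trash"));
  }
}
```

In the real code the pre-check would still propagate unrelated `IOException`s; only the two benign cases (trash root absent, or unreachable because the snapshot root is a file) are swallowed.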
## Testing

After this change, disallowing snapshots on a file throws `PathIsNotDirectoryException` as expected:

```
2020-11-09 09:49:15,347 [IPC Server handler 4 on default port 52270] INFO ipc.Server (Server.java:logException(3013)) - IPC Server handler 4 on default port 52270, call Call#30 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.disallowSnapshot from 127.0.0.1:52280
org.apache.hadoop.fs.PathIsNotDirectoryException: `/ssdir1/file1': Is not a directory
	at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:65)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.resetSnapshottable(SnapshotManager.java:289)
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.disallowSnapshot(FSDirSnapshotOp.java:76)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.disallowSnapshot(FSNamesystem.java:6933)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.disallowSnapshot(NameNodeRpcServer.java:1969)
```
