[ https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290222#comment-16290222 ]
Jack Bearden commented on HDFS-10673:
-------------------------------------
Hey guys, thanks a lot for your work on this great optimization.
There may be an edge case that is not being handled by the refactored code in
2.7.4. When overriding an {{INodeAttributeProvider}}, I get the following
NullPointerException:
{code}
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:315)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:247)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:192)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
{code}
[This|https://github.com/apache/hadoop/blob/branch-2.7.4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L244]
is where the code diverges when an {{INodeAttributeProvider}} is provided.
[This|https://github.com/apache/hadoop/blob/branch-2.7.3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L179]
is the null check that was in 2.7.3 and removed in 2.7.4.
I only encounter the NullPointerException when running {{hdfs dfs -ls /}} and
similar commands against the root path.
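To illustrate the failure mode, here is a minimal, hypothetical sketch — the names ({{NullSafeComponents}}, {{componentsToStrings}}) are illustrative, not the actual Hadoop code. When listing the root, the resolver leaves a null byte[] for the root component, and passing it straight to a bytes-to-string helper reproduces the NPE above; the 2.7.3-style guard skips it:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical sketch of the null guard the 2.7.3 code path applied
// before handing path components to an attribute provider.
public class NullSafeComponents {

  // Convert byte[][] path components to String[], skipping the leading
  // null root component that a listing of "/" produces.
  static String[] componentsToStrings(byte[][] components, int count) {
    // Without this check, a null root component reaches the
    // bytes-to-string conversion and throws the NPE seen above.
    int start = (count > 0 && components[0] == null) ? 1 : 0;
    String[] names = new String[count - start];
    for (int i = start; i < count; i++) {
      names[i - start] = new String(components[i], StandardCharsets.UTF_8);
    }
    return names;
  }

  public static void main(String[] args) {
    // Listing "/" yields a single null root component.
    byte[][] root = { null };
    System.out.println(Arrays.toString(componentsToStrings(root, 1))); // []

    byte[][] path = { null,
        "user".getBytes(StandardCharsets.UTF_8),
        "jack".getBytes(StandardCharsets.UTF_8) };
    System.out.println(Arrays.toString(componentsToStrings(path, 3))); // [user, jack]
  }
}
```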
[~zhz] could you please take a look?
> Optimize FSPermissionChecker's internal path usage
> --------------------------------------------------
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch,
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade
> performance and generate excessive garbage even when neither is used. Main
> issues:
> # A byte[][] of components is unnecessarily created. Each path component
> lookup converts a subrange of the byte[][] to a new String[], which the
> default attribute provider then never uses.
> # Subaccess checks are insanely expensive. The full path of every subdir is
> created by walking up the inode tree, creating an INode[], and building a
> string by converting each inode's byte[] name — all of which is only used
> if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer
> feature. For #2, paths should be created on-demand for exceptions.
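The "paths on demand" idea in #2 can be sketched as follows — a hedged illustration with stand-in types ({{LazyPathCheck}}, a toy {{INode}}), not Hadoop's real classes: the walk up the tree and the string building only happen in the failure branch, so the common allowed path pays nothing:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: defer full-path construction to the exceptional
// case instead of building it eagerly for every subtree check.
public class LazyPathCheck {
  static class INode {
    final String name;
    final INode parent;
    INode(String name, INode parent) { this.name = name; this.parent = parent; }
  }

  // Walk up the (toy) inode tree and assemble "/a/b/c". Called only
  // when a check fails, so the allowed case never allocates.
  static String fullPath(INode inode) {
    Deque<String> parts = new ArrayDeque<>();
    for (INode n = inode; n != null && !n.name.isEmpty(); n = n.parent) {
      parts.addFirst(n.name);
    }
    StringBuilder sb = new StringBuilder();
    for (String p : parts) sb.append('/').append(p);
    return sb.length() == 0 ? "/" : sb.toString();
  }

  static void check(INode inode, boolean allowed) {
    if (!allowed) {
      // Expensive path construction happens only on denial.
      throw new SecurityException("Permission denied: " + fullPath(inode));
    }
  }

  public static void main(String[] args) {
    INode root = new INode("", null);
    INode user = new INode("user", root);
    INode jack = new INode("jack", user);
    check(jack, true); // cheap: no path string built
    try {
      check(jack, false);
    } catch (SecurityException e) {
      System.out.println(e.getMessage()); // Permission denied: /user/jack
    }
  }
}
```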