[jira] [Commented] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-12 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541599#comment-16541599
 ] 

Jack Bearden commented on HDFS-13722:
-

Thanks [~aw]

> HDFS Native Client Fails Compilation on Ubuntu 18.04
> 
>
> Key: HDFS-13722
> URL: https://issues.apache.org/jira/browse/HDFS-13722
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: trunk
> Fix For: 3.2.0
>
> Attachments: HDFS-13722.001.patch
>
>
> When compiling the hdfs-native-client on Ubuntu 18.04, the build fails in 
> the RPC layer's request.cc.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-05 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HDFS-13722:

Labels: trunk  (was: )







[jira] [Updated] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-05 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HDFS-13722:

Attachment: HDFS-13722.001.patch
Status: Patch Available  (was: Open)

I managed to fix this issue with the following patch.







[jira] [Updated] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-05 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HDFS-13722:

Attachment: (was: HDFS-13722.001.patch)







[jira] [Updated] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-05 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HDFS-13722:

Attachment: HDFS-13722.001.patch







[jira] [Created] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-05 Thread Jack Bearden (JIRA)
Jack Bearden created HDFS-13722:
---

 Summary: HDFS Native Client Fails Compilation on Ubuntu 18.04
 Key: HDFS-13722
 URL: https://issues.apache.org/jira/browse/HDFS-13722
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jack Bearden


When compiling the hdfs-native-client on Ubuntu 18.04, the build fails in the RPC layer's request.cc.

 






[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2017-12-14 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291329#comment-16291329
 ] 

Jack Bearden commented on HDFS-10673:
-

Hey folks, this is the fix in trunk. Can we pull this into 2.7.5?

> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch, 
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used. Main 
> issues:
> # A byte[][] of components is unnecessarily created. Each path component 
> lookup converts a subrange of the byte[][] to a new String[], which the 
> default attribute provider then never uses.
> # Subaccess checks are extremely expensive. The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], and building a 
> string by converting each inode's byte[] name to a string, all of which is 
> only used if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer 
> feature. For #2, paths should be created on-demand for exceptions.
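The on-demand approach for #2 can be sketched as follows. This is a minimal illustration only: the INode shape and method name here are simplified assumptions, not the actual Hadoop classes.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Deque;

class OnDemandPathSketch {
    /** Simplified stand-in for an HDFS inode: a name plus a parent link. */
    static final class INode {
        final byte[] name;
        final INode parent;
        INode(byte[] name, INode parent) { this.name = name; this.parent = parent; }
    }

    /**
     * Build the full path only when it is actually needed (e.g. to format an
     * access-control error), instead of eagerly for every subdir visited.
     */
    static String buildPathOnDemand(INode leaf) {
        Deque<String> parts = new ArrayDeque<>();
        // Walk up the tree; the root inode has an empty name and is skipped.
        for (INode n = leaf; n != null && n.name.length > 0; n = n.parent) {
            parts.addFirst(new String(n.name, StandardCharsets.UTF_8));
        }
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append('/').append(p);
        }
        return sb.length() == 0 ? "/" : sb.toString();
    }
}
```

With this shape, the common success path never pays for the inode walk or the joined string; only the exception path does.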






[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2017-12-13 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290484#comment-16290484
 ] 

Jack Bearden commented on HDFS-10673:
-

Since the NullPointerException is being thrown from a rather benign function, 
it seems safe to just guard against the null element at position 0. I could 
very well be mistaken, however. The following code change appears to remedy the 
edge case on my dev cluster:

{code}
// FSPermissionChecker#getINodeAttrs
if (i == 0 && pathByNameArr[i] == null) {
  elements[i] = "";
} else {
  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
}
{code}
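For context, here is a self-contained sketch of how that guard might sit in the component-conversion loop. It is simplified and hypothetical, not the actual FSPermissionChecker source, and DFSUtil.bytes2String is stood in by a plain UTF-8 decode:

```java
import java.nio.charset.StandardCharsets;

class INodeAttrsSketch {
    /**
     * Convert path-component byte arrays to strings, tolerating the null
     * component that the root path "/" produces at position 0.
     */
    static String[] toElements(byte[][] pathByNameArr) {
        String[] elements = new String[pathByNameArr.length];
        for (int i = 0; i < pathByNameArr.length; i++) {
            if (i == 0 && pathByNameArr[i] == null) {
                elements[i] = ""; // root has no name component
            } else {
                // stand-in for DFSUtil.bytes2String
                elements[i] = new String(pathByNameArr[i], StandardCharsets.UTF_8);
            }
        }
        return elements;
    }
}
```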


> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch, 
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[] - then not used 
> by default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating a INode[], building a string 
> by converting each inode's byte[] name to a string, etc.  Which will only be 
> used if there's an exception.
> The expensive of #1 should only be incurred when using the provider/enforcer 
> feature.  For #2, paths should be created on-demand for exceptions.






[jira] [Comment Edited] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2017-12-13 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290222#comment-16290222
 ] 

Jack Bearden edited comment on HDFS-10673 at 12/14/17 2:02 AM:
---

Hey guys, thanks a lot for your work on this great optimization. 

There may be an edge case that is not being handled by the refactored code in 
2.7.4. When overriding an {{INodeAttributeProvider}}, I get the following 
NullPointerException:

{code}
java.lang.NullPointerException
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:315)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:247)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:192)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
{code}

[This|https://github.com/apache/hadoop/blob/branch-2.7.4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L244]
 is where the code diverges when an {{INodeAttributeProvider}} is provided.

[This|https://github.com/apache/hadoop/blob/branch-2.7.3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L179]
 is the null check that was in 2.7.3 and removed in 2.7.4.
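The contrast between the two branches can be reproduced in isolation. These are illustrative stand-ins only; DFSUtil.bytes2String's real implementation differs, but the null-handling difference is the point:

```java
import java.nio.charset.StandardCharsets;

class NullCheckSketch {
    // 2.7.4-style: no guard, so a null component (as produced for the root
    // path "/") throws a NullPointerException from the decode.
    static String bytes2String(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // 2.7.3-style: a null component is mapped to the empty string instead.
    static String bytes2StringSafe(byte[] bytes) {
        return bytes == null ? "" : new String(bytes, StandardCharsets.UTF_8);
    }
}
```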

I only encounter the NullPointerException when running {{hdfs dfs -ls /}} and 
variations of it.

[~zhz] could you please take a look?

Edit:
Also, I forgot to mention: this case only occurs for regular users, i.e. those 
for whom {{FSPermissionChecker#isSuperUser()}} returns false.



