[
https://issues.apache.org/jira/browse/HDFS-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175997#comment-14175997
]
Hudson commented on HDFS-7242:
------------------------------
FAILURE: Integrated in Hadoop-Hdfs-trunk #1905 (See
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1905/])
HDFS-7242. Code improvement for FSN#checkUnreadableBySuperuser. (Contributed by
Yi Liu) (vinayakumarb: rev 1c3ff0b7c892b9d70737c375fb6f4a6fc6dd6d81)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Code improvement for FSN#checkUnreadableBySuperuser
> ---------------------------------------------------
>
> Key: HDFS-7242
> URL: https://issues.apache.org/jira/browse/HDFS-7242
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 2.6.0
> Reporter: Yi Liu
> Assignee: Yi Liu
> Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7242.001.patch
>
>
> _checkUnreadableBySuperuser_ checks whether the superuser can access a
> specific path. The current logic is inefficient: it iterates over the inode's
> xattrs for every caller, when the check only matters for the _superuser_.
> Returning early for regular users saves a few CPU cycles.
> {code}
> private void checkUnreadableBySuperuser(FSPermissionChecker pc,
>     INode inode, int snapshotId) throws IOException {
>   for (XAttr xattr : dir.getXAttrs(inode, snapshotId)) {
>     if (XAttrHelper.getPrefixName(xattr).
>         equals(SECURITY_XATTR_UNREADABLE_BY_SUPERUSER)) {
>       if (pc.isSuperUser()) {
>         throw new AccessControlException("Access is denied for " +
>             pc.getUser() + " since the superuser is not allowed to " +
>             "perform this operation.");
>       }
>     }
>   }
> }
> {code}
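The early-exit refactoring described in the issue can be sketched as a
self-contained example. The HDFS types (FSPermissionChecker, XAttr, INode) are
replaced with simple stand-ins here, and the xattr name is illustrative; this
is a sketch of the idea, not the change committed in rev 1c3ff0b.

```java
import java.util.List;

// Sketch of the optimization: check isSuperUser once, before scanning the
// xattrs, so non-superuser callers skip the loop entirely. The types and
// the xattr prefix name below are stand-ins, not the real HDFS internals.
public class SuperuserXAttrCheck {
  static final String SECURITY_XATTR_UNREADABLE_BY_SUPERUSER =
      "security.hdfs.unreadable.by.superuser";

  static void checkUnreadableBySuperuser(boolean isSuperUser, String user,
      List<String> xattrPrefixNames) {
    if (!isSuperUser) {
      return; // only the superuser can be denied by this xattr
    }
    for (String prefixName : xattrPrefixNames) {
      if (prefixName.equals(SECURITY_XATTR_UNREADABLE_BY_SUPERUSER)) {
        // AccessControlException in HDFS; a runtime exception stands in here.
        throw new SecurityException("Access is denied for " + user
            + " since the superuser is not allowed to perform"
            + " this operation.");
      }
    }
  }

  public static void main(String[] args) {
    // Regular user: passes even when the xattr is present (no scan needed).
    checkUnreadableBySuperuser(false, "alice",
        List.of(SECURITY_XATTR_UNREADABLE_BY_SUPERUSER));
    // Superuser: denied when the xattr is present.
    boolean denied = false;
    try {
      checkUnreadableBySuperuser(true, "hdfs",
          List.of(SECURITY_XATTR_UNREADABLE_BY_SUPERUSER));
    } catch (SecurityException e) {
      denied = true;
    }
    System.out.println("denied=" + denied); // prints "denied=true"
  }
}
```

The point of the design change is that the per-xattr iteration is now gated on
the one condition that can actually trigger a denial, which is the improvement
the issue title refers to.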
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)