[ https://issues.apache.org/jira/browse/HBASE-18620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433522#comment-16433522 ]

Hadoop QA commented on HBASE-18620:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 6s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} branch-1 passed with JDK v1.8.0_163 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hbase-server in branch-1 failed with JDK v1.7.0_171. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1 passed with JDK v1.8.0_163 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.7.0_171 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_163 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_171. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_171. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
32s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  0m 
55s{color} | {color:red} The patch causes 44 errors with Hadoop v2.4.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  1m 
48s{color} | {color:red} The patch causes 44 errors with Hadoop v2.5.2. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_163 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.regionserver.TestGlobalThrottler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-18620 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918510/HBASE-18620-branch-1-v2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a4b6b1fe1de5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-1 / de0dd9e |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_171 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_163 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_171 |
| compile | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/artifact/patchprocess/branch-compile-hbase-server-jdk1.7.0_171.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/artifact/patchprocess/patch-compile-hbase-server-jdk1.7.0_171.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/artifact/patchprocess/patch-compile-hbase-server-jdk1.7.0_171.txt
 |
| hadoopcheck | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/artifact/patchprocess/patch-javac-2.4.1.txt
 |
| hadoopcheck | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/artifact/patchprocess/patch-javac-2.5.2.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/testReport/ |
| Max. process+thread count | 3573 (vs. ulimit of 10000) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12386/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Secure bulkload job fails when HDFS umask has limited scope
> -----------------------------------------------------------
>
>                 Key: HBASE-18620
>                 URL: https://issues.apache.org/jira/browse/HBASE-18620
>             Project: HBase
>          Issue Type: Bug
>          Components: security
>            Reporter: Pankaj Kumar
>            Assignee: Pankaj Kumar
>            Priority: Major
>             Fix For: 1.5.0
>
>         Attachments: HBASE-18620-branch-1-v2.patch, HBASE-18620-branch-1.patch
>
>
> By default, the "hbase.fs.tmp.dir" parameter value is 
> /user/$\{user.name}/hbase-staging.
> The RegionServer creates the staging directory (hbase.bulkload.staging.dir, 
> which defaults to hbase.fs.tmp.dir) while opening a region, as shown below, 
> when SecureBulkLoadEndpoint is configured in hbase.coprocessor.region.classes:
> {noformat}
> drwx------ - hbase hadoop 0 2017-08-12 13:55 /user/xyz
> drwx--x--x - hbase hadoop 0 2017-08-12 13:55 /user/xyz/hbase-staging
> drwx--x--x - hbase hadoop 0 2017-08-12 13:55 
> /user/xyz/hbase-staging/DONOTERASE
> {noformat}
> Here,
> 1. The RegionServer is started as the "xyz" Linux user.
> 2. The HDFS umask (fs.permissions.umask-mode) has been set to 077, so file/dir 
> permissions will not be wider than 700. The "/user/xyz" directory (which did 
> not exist before) gets permission 700, and "/user/xyz/hbase-staging" gets 711, 
> because we only set the permission of the staging directory itself, not of the 
> parent directories that the RegionServer creates via fs.mkdirs().
> Secure bulkload then fails because other users do not have EXECUTE permission 
> on the "/user/xyz" directory, as sketched below.
> *Steps to reproduce:*
> ==================
> 1. Configure org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint 
> in "hbase.coprocessor.region.classes" on the client side.
> 2. Log in to the machine as the "root" Linux user.
> 3. kinit as any Kerberos user other than the RegionServer's Kerberos user 
> (say, admin).
> 4. ImportTSV creates the user temp directory (hbase.fs.tmp.dir) while 
> writing the partition file:
> {noformat}
> drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root
> drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root/hbase-staging
> {noformat}
> 5. During the LoadIncrementalHFiles job:
> - a. prepareBulkLoad() step - a random dir is created with the RegionServer's 
> credentials:
> {noformat}
> drwxrwxrwx - hbase hadoop 0 2017-08-12 14:58 
> /user/xyz/hbase-staging/hbase__t1__e67b23m2ghe6fkn1bqrb95ak41ferj8957cdhsep4ebmpohm22nvi54vh8g3qh1
> {noformat}
> - b. secureBulkLoadHFiles() step - the family dir existence check and creation 
> are done using the client user's credentials. Here the client operation fails 
> as below:
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=admin, access=EXECUTE, 
> inode="/user/xyz/hbase-staging/admin__t1__e1f3m4r2prud9117thg5pdg91lkg0le0fdvtbbpg03epqg0f14lv54j8sqd8s0n6/cf1":hbase:hadoop:drwx------
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:342)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:279)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:223)
>       at 
> com.huawei.hadoop.adapter.hdfs.plugin.HWAccessControlEnforce.checkPermission(HWAccessControlEnforce.java:69)
> {noformat}
> So the root cause is that the "admin" user doesn't have EXECUTE permission on 
> "/user/xyz", because the RegionServer created this intermediate parent 
> directory while opening a region (with SecureBulkLoadEndpoint), and its 
> default permission was set to 700 based on the HDFS umask 077.
> *Solution:*
> =========
> This can be worked around by creating /user/xyz manually and setting 
> sufficient permissions explicitly, but HBase should handle it by setting 
> sufficient permissions on the intermediate staging directories that the 
> RegionServer creates, as sketched below.
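> A rough sketch of that direction, assuming a hypothetical helper (the name 
> createStagingDir and the 711 mode are illustrative, not the actual patch):
> {noformat}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
>
> public class StagingDirFixSketch {
>   /**
>    * Create the staging directory and make every intermediate directory
>    * created along the way traversable, not just the leaf.
>    */
>   static void createStagingDir(FileSystem fs, Path staging) throws IOException {
>     // Find the deepest ancestor that already exists.
>     Path existing = staging;
>     while (existing != null && !fs.exists(existing)) {
>       existing = existing.getParent();
>     }
>     fs.mkdirs(staging);
>     // Re-apply a sufficiently open mode to every directory we just created,
>     // so clients can traverse down to the staging directory.
>     FsPermission perm = new FsPermission("711");
>     for (Path p = staging; p != null && !p.equals(existing); p = p.getParent()) {
>       fs.setPermission(p, perm);
>     }
>   }
> }
> {noformat}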


