[ https://issues.apache.org/jira/browse/AMBARI-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302802#comment-15302802 ]

Hudson commented on AMBARI-16844:
---------------------------------

FAILURE: Integrated in Ambari-trunk-Commit #4933 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4933/])
AMBARI-16844. HBase backups fail if there is no /user/hbase directory in HDFS (aonishuk: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=f96969a4543ef93e5c4e0c0e4f998c8f9ebd598c])
* ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py
* ambari-common/src/main/python/resource_management/libraries/functions/stack_features.py
* ambari-common/src/main/python/resource_management/libraries/functions/constants.py
* ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json
* ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py


> Create directory for hbase user when hbase is deployed
> ------------------------------------------------------
>
>                 Key: AMBARI-16844
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16844
>             Project: Ambari
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Andrew Onischuk
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16844.patch
>
>
> HBase backups fail if there is no /user/{hbase_user} directory in HDFS.
> {code}
> 2016-05-18 00:05:41,051 ERROR [ProcedureExecutorThread-1] snapshot.ExportSnapshot: Snapshot export failed
> org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/user/hbase/.staging":hdfs:hdfs:drwxr-xr-x
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1813)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1797)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1780)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4002)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1098)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:644)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2268)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2264)
>       at java.security.AccessController.doPrivileged(Native Method)
> {code}
> The /user/{hbase_user} directory should be created automatically when HBase is deployed, where hbase_user is the name of the HBase service user.
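Until a cluster picks up the fix, the directory can be provisioned manually with the HDFS CLI. A minimal sketch, assuming a hypothetical helper (not Ambari's actual code, which uses its resource_management HdfsResource library) that composes the equivalent commands for a given hbase_user:

```python
def user_dir_commands(hbase_user="hbase", hdfs_group="hdfs"):
    """Compose the HDFS CLI commands that provision /user/<hbase_user>.

    Hypothetical helper for illustration only; the AMBARI-16844 patch
    achieves the same effect during HBase deployment.
    """
    path = "/user/%s" % hbase_user
    return [
        "hdfs dfs -mkdir -p %s" % path,                               # create the home directory
        "hdfs dfs -chown %s:%s %s" % (hbase_user, hdfs_group, path),  # give it to the hbase user
        "hdfs dfs -chmod 755 %s" % path,                              # rwx owner, rx group/other
    ]

if __name__ == "__main__":
    print("\n".join(user_dir_commands()))
```

Run as the HDFS superuser so the chown succeeds; with the default arguments this yields the commands for /user/hbase.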



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
