[ https://issues.apache.org/jira/browse/HBASE-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved HBASE-16521.
----------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed

Thanks for the review, Vlad.

> Restore operation would fail if the hbase.tmp.dir directory is absent or 
> doesn't have proper permission
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-16521
>                 URL: https://issues.apache.org/jira/browse/HBASE-16521
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>              Labels: backup
>         Attachments: 16521.v1.txt, 16521.v2.txt
>
>
> I ran the backup IT test and bumped into the following:
> {code}
> 2016-08-29 20:38:31,390 INFO  [main] mapreduce.Job: Job job_1472498400634_0004 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/tmp/hbase-hbase/bulk_output-default-IntegrationTestBackupRestore.table1-1472503079471/_temporary/1":hdfs:hdfs:drwxr-xr-x
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>   at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:307)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> {code}
> Here is the related code in MapReduceRestoreService:
> {code}
>   private Path getBulkOutputDir(String tableName) throws IOException {
>     Configuration conf = getConf();
>     FileSystem fs = FileSystem.get(conf);
>     String tmp = conf.get("hbase.tmp.dir");
>     Path path = new Path(tmp + Path.SEPARATOR + "bulk_output-" + tableName + "-"
>         + EnvironmentEdgeManager.currentTime());
> {code}
> conf.get("hbase.tmp.dir") returned /tmp/hbase-hbase, a local-filesystem temp path that was never created on HDFS, so the MR job tried to create its output under the hdfs:hdfs-owned /tmp and hit the permission error above.
> We should use hbase.fs.tmp.dir as the base dir to avoid this.
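> A minimal sketch of that change, keeping the method shape above and falling back to the usual per-user HDFS staging path when hbase.fs.tmp.dir is unset (the fallback value and the mkdirs step are assumptions for illustration, not taken from the attached patches):
> {code}
>   // imports as in MapReduceRestoreService: Configuration, FileSystem, Path,
>   // EnvironmentEdgeManager, IOException
>   private Path getBulkOutputDir(String tableName) throws IOException {
>     Configuration conf = getConf();
>     FileSystem fs = FileSystem.get(conf);
>     // Resolve the staging dir on the cluster filesystem, not the local
>     // temp dir that hbase.tmp.dir points at. The fallback default here
>     // is an assumption mirroring the per-user staging convention.
>     String tmp = conf.get("hbase.fs.tmp.dir",
>         "/user/" + System.getProperty("user.name") + "/hbase-staging");
>     Path path = new Path(tmp + Path.SEPARATOR + "bulk_output-" + tableName + "-"
>         + EnvironmentEdgeManager.currentTime());
>     // Create the dir up front so the job's _temporary subdirs land under
>     // a path the hbase user owns, avoiding the AccessControlException.
>     if (!fs.exists(path)) {
>       fs.mkdirs(path);
>     }
>     return path;
>   }
> {code}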



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
