[
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568147#comment-14568147
]
Ted Yu commented on HBASE-13625:
--------------------------------
With this change, if /user/${user.name} does not exist and
SecureBulkLoadEndpoint is loaded, the region server shuts down due to:
{code}
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/user/hbase/hbase-staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
...
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/user/hbase/hbase-staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
{code}
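Until a fix lands, one way to avoid the abort is to pre-create the staging directory as a user that can write under /user. A minimal sketch only, not part of any patch here; the hbase.bulkload.staging.dir key is the existing secure bulk load setting, while the default path and permission mode below are illustrative assumptions:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Illustrative pre-flight only: create the staging directory before
// SecureBulkLoadEndpoint starts, so region server startup does not die on
// the AccessControlException above. Must run as a user allowed to write
// under /user (e.g. hdfs).
public class PrepareStagingDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path staging = new Path(conf.get("hbase.bulkload.staging.dir",
        "/user/hbase/hbase-staging"));
    if (!fs.exists(staging)) {
      // 711 keeps the directory traversable but not listable by others;
      // the exact mode is an assumption, not taken from any patch
      fs.mkdirs(staging, new FsPermission((short) 0711));
    }
  }
}
{code}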
> Use HDFS for HFileOutputFormat2 partitioner's path
> --------------------------------------------------
>
> Key: HBASE-13625
> URL: https://issues.apache.org/jira/browse/HBASE-13625
> Project: HBase
> Issue Type: Bug
> Components: mapreduce
> Affects Versions: 2.0.0, 1.1.0, 1.2.0
> Reporter: Stephen Yuan Jiang
> Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13625-v2.patch, HBASE-13625.patch
>
>
> HBASE-13010 changed the hard-coded '/tmp' in HFileOutputFormat2 partitioner's
> path to 'hadoop.tmp.dir'. This breaks unit tests on Windows.
> {code}
> static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
> ...
>   // create the partitions file
> - FileSystem fs = FileSystem.get(job.getConfiguration());
> - Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
> + FileSystem fs = FileSystem.get(conf);
> + Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
> {code}
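> For context, the partitions file written here is consumed by TotalOrderPartitioner in every task, so it must live on a filesystem all tasks can reach. A rough sketch of the surrounding flow, simplified from HFileOutputFormat2 (writePartitions is that class's own private helper; treat details as assumptions):
> {code}
> // Simplified shape of configurePartitioner: the split points go into a
> // SequenceFile that TotalOrderPartitioner reads back in each task, which
> // is why a node-local temp path is a poor fit.
> Configuration conf = job.getConfiguration();
> Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"),
>     "partitions_" + UUID.randomUUID());
> FileSystem fs = partitionsPath.getFileSystem(conf);
> writePartitions(conf, partitionsPath, splitPoints);  // HFileOutputFormat2 helper
> fs.deleteOnExit(partitionsPath);
> job.setPartitionerClass(TotalOrderPartitioner.class);
> TotalOrderPartitioner.setPartitionFile(conf, partitionsPath);
> {code}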
> Here is the exception from one of the UTs when run on Windows (from
> branch-1.1); the ':' from the Windows drive letter is not a valid character
> in a DFS filename:
> {code}
> java.lang.IllegalArgumentException: Pathname /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e from C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e is not a valid DFS filename.
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
>     at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1074)
>     at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1374)
>     at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
>     at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
>     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
>     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
>     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
>     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
>     at org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
>     at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
>     at org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
> {code}
> The proposed fix is to use a config setting to point to an HDFS directory.
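> A minimal sketch of that direction (the hbase.fs.tmp.dir key and its default under the user's HDFS home follow the attached patch; treat the exact names as assumptions until it is committed):
> {code}
> // Resolve the partitions file against an HDFS-side temp dir instead of
> // the node-local hadoop.tmp.dir.
> Configuration conf = job.getConfiguration();
> FileSystem fs = FileSystem.get(conf);
> String hbaseTmpFsDir = conf.get("hbase.fs.tmp.dir",
>     fs.getHomeDirectory() + "/hbase-staging");
> // qualifying against the default (distributed) filesystem means the path
> // can never pick up a local Windows drive letter, so no ':' sneaks in
> Path partitionsPath = fs.makeQualified(
>     new Path(hbaseTmpFsDir, "partitions_" + UUID.randomUUID()));
> {code}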
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)