[ https://issues.apache.org/jira/browse/HBASE-15966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333243#comment-15333243 ]
Hadoop QA commented on HBASE-15966:
-----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 3s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 81m 34s {color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 130m 19s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12809038/15966.v1.txt |
| JIRA Issue | HBASE-15966 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh |
| git revision | master / f19f1d9 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/2242/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/2242/console |
| Powered by | Apache Yetus 0.2.1 http://yetus.apache.org |
This message was automatically generated.
> Bulk load unable to read HFiles from different filesystem type than
> fs.defaultFS
> --------------------------------------------------------------------------------
>
> Key: HBASE-15966
> URL: https://issues.apache.org/jira/browse/HBASE-15966
> Project: HBase
> Issue Type: Bug
> Components: hbase, HFile
> Affects Versions: 0.98.4
> Environment: Microsoft Azure HDInsight 3.2 cluster with eight hosts
> - Ubuntu 12.04.5
> - HDP 2.2
> - Hadoop 2.6.0
> - HBase 0.98.4
> Reporter: Dustin Christmann
> Assignee: Ted Yu
> Attachments: 15966.v1.txt
>
>
> In a YARN job, I am creating HFiles with code that has been cribbed from the
> TableOutputFormat class and bulkloading them with
> LoadIncrementalHFiles.doBulkLoad.
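> For reference, a minimal sketch of the loading side of that workflow (the
> table name and HFile directory below are placeholders, not the actual job):
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
>
> public class BulkLoadDriver {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     try (HTable table = new HTable(conf, "my_namespace:my_table")) {
>       LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
>       // Directory of prepared HFiles; note the explicit hdfs:// scheme,
>       // which may differ from fs.defaultFS on this cluster (wasb://).
>       loader.doBulkLoad(new Path("hdfs://mycluster/staging/hfiles"), table);
>     }
>   }
> }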
> On other clusters, where fs.defaultFS is set to an hdfs: URI, and my HFiles
> are placed in an hdfs: URI, the bulkload works as intended.
> On this particular cluster, where fs.defaultFS is set to a wasb: URI and my
> HFiles are placed in a wasb: URI, the bulkload also works as intended.
> However, on this same cluster, whenever I place the HFiles in an hdfs: URI,
> the HBase client repeatedly logs the following in my application:
> [02 Jun 23:23:26.002](20259/140062246807296) Info2:org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://[my cluster]/[my path] first=\x00\x00\x11\x06 last=;\x8B\x85\x18
> [02 Jun 23:23:26.002](20259/140062245754624) Info3:org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles: Going to connect to server region=[my namespace]:[my table],,1464909723920.00eafdb73989312bd8864f0913255f50., hostname=10.0.1.6,16020,1464698786237, seqNum=2 for row with hfile group [{[B@4d0409e7,hdfs://[my cluster]/[my path]}]
> [02 Jun 23:23:26.012](20259/140062245754624) Info1:org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles: Attempt to bulk load region containing into table [my namespace]:[my table] with files [family:[my family] path:hdfs://[my cluster]/[my path]] failed. This is recoverable and they will be retried.
> [02 Jun 23:23:26.019](20259/140061634982912) Info2:org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles: Split occured while grouping HFiles, retry attempt 2 with 1 files remaining to group or split
> And when I look at the appropriate region server's log, I find the following
> exception repeatedly:
> 2016-06-02 20:22:50,771 ERROR [B.DefaultRpcServer.handler=22,queue=2,port=16020] access.SecureBulkLoadEndpoint: Failed to complete bulk load
> java.io.FileNotFoundException: File doesn't exist: hdfs://[my cluster]/[my path]
>         at org.apache.hadoop.fs.azure.NativeAzureFileSystem.setPermission(NativeAzureFileSystem.java:2192)
>         at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:280)
>         at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:270)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:356)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1651)
>         at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:270)
>         at org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631)
>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6986)
>         at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3456)
>         at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3438)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29998)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2080)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>         at java.lang.Thread.run(Thread.java:745)
> Looking at the appropriate code in SecureBulkLoadEndpoint.java, I'm finding
> the following:
> public Boolean run() {
>   FileSystem fs = null;
>   try {
>     Configuration conf = env.getConfiguration();
>     fs = FileSystem.get(conf);
>     for (Pair<byte[], String> el : familyPaths) {
>       Path p = new Path(el.getSecond());
>       Path stageFamily = new Path(bulkToken, Bytes.toString(el.getFirst()));
>       if (!fs.exists(stageFamily)) {
>         fs.mkdirs(stageFamily);
>         fs.setPermission(stageFamily, PERM_ALL_ACCESS);
>       }
>     }
> The call to FileSystem.get(conf) is the culprit: it returns the filesystem
> named by fs.defaultFS, so any path that lives on a different filesystem type
> than the default gets handed to the wrong FileSystem implementation, here
> and in any similar cross-filesystem setup.
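> A sketch of the direction this suggests: resolve each FileSystem from the
> path it will operate on, via Path.getFileSystem(conf). The surrounding names
> (env, familyPaths, bulkToken, PERM_ALL_ACCESS) come from the snippet above;
> this illustrates the idea and is not necessarily what the attached
> 15966.v1.txt does.
>
> public Boolean run() {
>   try {
>     Configuration conf = env.getConfiguration();
>     // Staging directories stay on the cluster's default filesystem.
>     FileSystem targetFs = FileSystem.get(conf);
>     for (Pair<byte[], String> el : familyPaths) {
>       Path p = new Path(el.getSecond());
>       // Resolve the source filesystem from the HFile's own URI, so an
>       // hdfs:// path is never handed to the wasb:// default filesystem.
>       FileSystem srcFs = p.getFileSystem(conf);
>       Path stageFamily = new Path(bulkToken, Bytes.toString(el.getFirst()));
>       if (!targetFs.exists(stageFamily)) {
>         targetFs.mkdirs(stageFamily);
>         targetFs.setPermission(stageFamily, PERM_ALL_ACCESS);
>       }
>       // ... subsequent operations on the HFile itself (existence checks,
>       // moves into the staging directory) would go through srcFs ...
>     }
>     // ... remainder as before ...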
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)