[jira] [Commented] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966336#comment-14966336
 ] 

Hadoop QA commented on HADOOP-10406:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 5s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.test.TestTimedOutTestsListener |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-21 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767727/HADOOP-10406.002.patch
 |
| JIRA Issue | HADOOP-10406 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 38d64960284e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-5d4f0d0/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 0c4af0f |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:

[jira] [Updated] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-10406:
---
Status: Patch Available  (was: Open)

Thanks Andrew for the comment! Updated patch 002 to use {{GenericTestUtils}}.
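
For context, a minimal sketch of the polling-style wait that {{GenericTestUtils}} 
makes easy (the helper name and timeout values are illustrative assumptions, not 
the actual patch):

{code:java}
import java.util.concurrent.TimeoutException;

import com.google.common.base.Supplier;

import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.test.GenericTestUtils;

public class ConnectionWaitSketch {
  // Instead of asserting server.getNumOpenConnections() immediately --
  // which races with the server's accept/reader threads -- poll until the
  // count drops to the expected maximum or the wait times out.
  static void waitForAtMost(final Server server, final int maxAccept)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(new Supplier<Boolean>() {
      @Override
      public Boolean get() {
        return server.getNumOpenConnections() <= maxAccept;
      }
    }, 100, 10000); // re-check every 100 ms, give up after 10 s
  }
}
{code}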

> TestIPC.testIpcWithReaderQueuing may fail
> -
>
> Key: HADOOP-10406
> URL: https://issues.apache.org/jira/browse/HADOOP-10406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiao Chen
> Attachments: HADOOP-10406.001.patch, HADOOP-10406.002.patch
>
>
> The test may fail with AssertionError.  The value 
> server.getNumOpenConnections() could be larger than maxAccept; see comments 
> for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-10406:
---
Attachment: HADOOP-10406.002.patch

> TestIPC.testIpcWithReaderQueuing may fail
> -
>
> Key: HADOOP-10406
> URL: https://issues.apache.org/jira/browse/HADOOP-10406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiao Chen
> Attachments: HADOOP-10406.001.patch, HADOOP-10406.002.patch
>
>
> The test may fail with AssertionError.  The value 
> server.getNumOpenConnections() could be larger than maxAccept; see comments 
> for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-10406:
---
Status: Open  (was: Patch Available)

> TestIPC.testIpcWithReaderQueuing may fail
> -
>
> Key: HADOOP-10406
> URL: https://issues.apache.org/jira/browse/HADOOP-10406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiao Chen
> Attachments: HADOOP-10406.001.patch
>
>
> The test may fail with AssertionError.  The value 
> server.getNumOpenConnections() could be larger than maxAccept; see comments 
> for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12492) maven install triggers bats test

2015-10-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966110#comment-14966110
 ] 

Allen Wittenauer commented on HADOOP-12492:
---

More importantly:

Before:

https://builds.apache.org/job/PreCommit-HADOOP-Build/7894/artifact/patchprocess/branch-mvninstall-root.txt

After:

https://builds.apache.org/job/PreCommit-HADOOP-Build/7894/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt

Unit tests should still trigger bats normally, but Yetus is smart and turns it 
off.

> maven install triggers bats test
> 
>
> Key: HADOOP-12492
> URL: https://issues.apache.org/jira/browse/HADOOP-12492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>
> Yetus running maven install with bats installed triggers 
> common-test-bats-driver, even if -DskipTests is turned on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12496) AWS SDK version

2015-10-20 Thread Yongjia Wang (JIRA)
Yongjia Wang created HADOOP-12496:
-

 Summary: AWS SDK version
 Key: HADOOP-12496
 URL: https://issues.apache.org/jira/browse/HADOOP-12496
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Yongjia Wang


The hadoop-aws jar still depends on the very old 1.7.4 version of aws-java-sdk.
Newer versions of the SDK contain incompatible API changes, which lead to the 
following error when the S3A class is used while a newer version of the SDK is 
present.
This is because S3A calls the method with "int" as the parameter type while the 
new SDK expects "long". This makes it impossible to use kinesis + s3a in the 
same process.
It would be very helpful to upgrade hadoop-aws's aws-sdk version.
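
A minimal sketch of the binary incompatibility (the class and method names come 
from the stack trace below; the demo wiring is an illustration):

{code:java}
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

public class ThresholdLinkDemo {
  public static void main(String[] args) {
    TransferManagerConfiguration conf = new TransferManagerConfiguration();
    // Compiled against aws-java-sdk 1.7.4 this resolves to
    // setMultipartUploadThreshold(int), i.e. the descriptor (I)V.
    // Newer SDKs only declare setMultipartUploadThreshold(long), i.e. (J)V,
    // and the JVM links by exact descriptor -- hence the NoSuchMethodError
    // in S3AFileSystem.initialize() when a newer SDK jar is on the classpath.
    conf.setMultipartUploadThreshold(100 * 1024 * 1024);
  }
}
{code}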

java.lang.NoSuchMethodError: 
com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:285)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:130)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:104)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
at $iwC$$iwC$$iwC.<init>(<console>:42)
at $iwC$$iwC.<init>(<console>:44)
at $iwC.<init>(<console>:46)
at <init>(<console>:48)
at .<init>(<console>:52)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at 
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
at 
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at 
org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:655)
at 
org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:620)
at 
org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:613)
at 
org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at 
org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at 
org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966098#comment-14966098
 ] 

Hudson commented on HADOOP-12418:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2456 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2456/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. A race 
> in the test, or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?
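
One generic way an interrupt can end up "not picked up" (a standalone sketch of 
a well-known pattern, not the actual TestRPC or RPC code):

{code:java}
public class SwallowedInterrupt {
  public static void main(String[] args) throws Exception {
    Thread caller = new Thread(new Runnable() {
      @Override
      public void run() {
        try {
          Thread.sleep(1000); // stand-in for a blocking point inside the call
        } catch (InterruptedException e) {
          // Swallowed: the throw cleared the interrupt status and it is never
          // restored via Thread.currentThread().interrupt(), so any later
          // blocking call proceeds as if no interrupt ever happened.
        }
        System.out.println("still interrupted? "
            + Thread.currentThread().isInterrupted()); // typically false
      }
    });
    caller.start();
    caller.interrupt();
    caller.join();
  }
}
{code}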



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966086#comment-14966086
 ] 

Hadoop QA commented on HADOOP-12494:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 55s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestRecoverStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767682/HADOOP-12494.patch |
| JIRA Issue | HADOOP-12494 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  

[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966063#comment-14966063
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 15s 
{color} | {color:red} hadoop-common-project_hadoop-common-jdk1.8.0_60 with JDK 
v1.8.0_60 has problems. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 11s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 15s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-21 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767702/HADOOP-12492.00.patch 
|
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  xml  |
| uname | Linux 547f1282277c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-58cf712/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 0c4af0f |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| javadoc | hadoop-common-project_hadoop-common-jdk1.8.0_60: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7900/artifact/patchprocess/javadoc-hadoop-common-project_hadoop-common-jdk1.8.0_60-diff.txt
 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7900/testReport/ |
| Max memory used | 227MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7900/console |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12493.00.patch)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12492.00.patch

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch, HADOOP-12493.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966010#comment-14966010
 ] 

Hudson commented on HADOOP-12418:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2507 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2507/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. A race 
> in the test, or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965985#comment-14965985
 ] 

Hudson commented on HADOOP-12418:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #519 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/519/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. A race 
> in the test, or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965949#comment-14965949
 ] 

Hudson commented on HADOOP-12418:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1294/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. A race 
> in the test, or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965887#comment-14965887
 ] 

Allen Wittenauer commented on HADOOP-12494:
---

[~owen.omalley], you might want to look at this.  Thanks. :)

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.
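
A minimal sketch of the distinction, assuming the bug is an {{addToken}} call 
keyed by kind (the helper is illustrative, not the actual fetchdt source):

{code:java}
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenKeySketch {
  static void store(Credentials creds, Token<? extends TokenIdentifier> token) {
    // Keyed by kind: two tokens of the same kind (e.g. HDFS_DELEGATION_TOKEN)
    // for different services overwrite each other in the credentials file.
    creds.addToken(token.getKind(), token);
    // Keyed by service: the service (typically host:port) is unique per
    // target, so each token keeps its own entry.
    creds.addToken(token.getService(), token);
  }
}
{code}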



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12494:
--
Hadoop Flags: Incompatible change

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim updated HADOOP-12494:

Attachment: HADOOP-12494.patch

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim updated HADOOP-12494:

Status: Patch Available  (was: In Progress)

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494
>
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12494 started by HeeSoo Kim.
---
> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494
>
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim updated HADOOP-12494:

Attachment: HADOOP-12494

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494
>
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim reassigned HADOOP-12494:
---

Assignee: HeeSoo Kim

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
>
> The fetchdt command stores the token in a file. However, the key of the 
> token is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12495) Fix posix_spawn error on OS X

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965868#comment-14965868
 ] 

Hadoop QA commented on HADOOP-12495:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 32s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 41s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767659/HADOOP-12495.00.patch 
|
| JIRA Issue | HADOOP-12495 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux cf0081e3ba94 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchp

[jira] [Commented] (HADOOP-12492) maven install triggers bats test

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965841#comment-14965841
 ] 

Hadoop QA commented on HADOOP-12492:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 9s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767663/HADOOP-12492.00.patch 
|
| JIRA Issue | HADOOP-12492 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  xml  |
| uname | Linux e12502dc53da 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-4ec64a8/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 6c8b6f3 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7894/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7894/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7894/testReport/ |
| Max memory used | 227MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7894/console |


This message was automatically generated.



> maven install triggers bats test
> 
>
> Key: HADOOP-12492
> URL: https://issues.apache.org/jira/browse/HADOOP-12492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>
> Yetus running maven install with bats i

[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 4m 22s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-19 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767499/HADOOP-12493.00.patch 
|
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  unit  shellcheck  |
| uname | Linux 4e2511941c75 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-30c4bc4/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 8175c4f |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| shellcheck | v0.4.1 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7881/testReport/ |
| Max memory used | 39MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7881/console |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12493.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 3m 52s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed TAP tests | hadoop_add_to_classpath_userpath.bats.tap |
|   | hadoop_add_to_classpath_userpath.bats.tap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-19 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767492/HADOOP-12493.00.patch 
|
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  unit  shellcheck  |
| uname | Linux a607fc827a24 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-30c4bc4/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 5068a25 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| shellcheck | v0.4.1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt
 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/testReport/ |
| TAP logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/artifact/patchprocess/patch-hadoop-common-project_hadoop-common-jdk1.8.0_60.tap
 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/artifact/patchprocess/patch-hadoop-common-project_hadoop-common-jdk1.7.0_79.tap
 |
| Max memory used | 38MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7879/console |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12493.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12334.06.patch)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12493.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12334.06.patch)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12493.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 6s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 3m 45s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-10-19 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12767499/HADOOP-12493.00.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  unit  shellcheck  |
| uname | Linux 390790f3637d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-30c4bc4/dev-support/personality/hadoop.sh |
| git revision | trunk / 8175c4f |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| shellcheck | v0.4.1 |
| JDK v1.7.0_79  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7880/testReport/ |
| Max memory used | 38MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7880/console |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12493.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965816#comment-14965816
 ] 

Hudson commented on HADOOP-12418:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #574 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/574/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + Java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Is this 
> a race in the test, or the surfacing of a bug in RPC where interrupt 
> exceptions are sometimes not picked up?
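
A common remedy for this kind of race, sketched below with hypothetical helper names and under no assumption about the committed patch, is to make the test wait until the worker thread has verifiably started before delivering the interrupt, then assert that the interrupt surfaced:

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class InterruptRaceSketch {
  // Hypothetical stand-in for the blocking RPC call under test.
  static void blockingRpcCall() throws InterruptedException {
    Thread.sleep(60000);
  }

  public static void main(String[] args) throws Exception {
    final CountDownLatch started = new CountDownLatch(1);
    final AtomicBoolean interrupted = new AtomicBoolean(false);
    Thread worker = new Thread(() -> {
      started.countDown();
      try {
        blockingRpcCall();
      } catch (InterruptedException e) {
        interrupted.set(true);   // the interrupt was picked up
      }
    });
    worker.start();
    started.await();             // worker is running...
    Thread.sleep(100);           // ...and should now be inside the call
    worker.interrupt();
    worker.join(10000);
    if (!interrupted.get()) {
      throw new AssertionError("interrupt was not picked up");
    }
  }
}
{code}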



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965806#comment-14965806
 ] 

Hudson commented on HADOOP-12418:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #559 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/559/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + Java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Is this 
> a race in the test, or the surfacing of a bug in RPC where interrupt 
> exceptions are sometimes not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Nemanja Matkovic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemanja Matkovic updated HADOOP-12491:
--
Resolution: Implemented
Status: Resolved  (was: Patch Available)

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965792#comment-14965792
 ] 

Hudson commented on HADOOP-12418:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8672 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8672/])
HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently. (kihwal: 
rev 01b103f4ff2e8ee7e71d082885436c5cb7c6be0b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + Java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Is this 
> a race in the test, or the surfacing of a bug in RPC where interrupt 
> exceptions are sometimes not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12492) maven install triggers bats test

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12492:
--
Attachment: HADOOP-12492.00.patch

> maven install triggers bats test
> 
>
> Key: HADOOP-12492
> URL: https://issues.apache.org/jira/browse/HADOOP-12492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>
> Yetus running maven install with bats installed triggers 
> common-test-bats-driver, even if -DskipTests is turned on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12492) maven install triggers bats test

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12492:
-

Assignee: Allen Wittenauer

> maven install triggers bats test
> 
>
> Key: HADOOP-12492
> URL: https://issues.apache.org/jira/browse/HADOOP-12492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>
> Yetus running maven install with bats installed triggers 
> common-test-bats-driver, even if -DskipTests is turned on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12492) maven install triggers bats test

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12492:
--
Status: Patch Available  (was: Open)

> maven install triggers bats test
> 
>
> Key: HADOOP-12492
> URL: https://issues.apache.org/jira/browse/HADOOP-12492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12492.00.patch
>
>
> Yetus running maven install with bats installed triggers 
> common-test-bats-driver, even if -DskipTests is turned on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12122) Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12122:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

Resolving in favor of the per-module issues.

> Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HADOOP-12122
> URL: https://issues.apache.org/jira/browse/HADOOP-12122
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nate Edel
>Assignee: Nemanja Matkovic
> Attachments: HADOOP-12122-HADOOP-11890.0.patch, 
> HADOOP-12122-HADOOP-11890.3.patch, HADOOP-12122-HADOOP-11890.4.patch, 
> HADOOP-12122-HADOOP-11890.5.patch, HADOOP-12122-HADOOP-11890.6.patch, 
> HADOOP-12122-HADOOP-11890.7.patch, HADOOP-12122-HADOOP-11890.8.patch, 
> HADOOP-12122-HADOOP-11890.9.patch, HADOOP-12122-HADOOP-12122.2.patch, 
> HADOOP-12122-HADOOP-12122.3.patch, HADOOP-12122.0.patch, 
> lets_blow_up_a_lot_of_tests.patch
>
>
> There is a fairly extensive number of locations, found via code inspection, 
> that use unsafe methods of handling addresses in a dual-stack or IPv6-only 
> world:
> - splitting on the first ":", assuming it delimits a host from a port
> - producing a host:port pair by appending :port blindly (Java prefers 
> [ipv6]:port, which is the standard form for IPv6 URIs)
> - depending on the behavior of InetSocketAddress.toString(), which produces 
> the above.
> This patch fixes those patterns where I can find them, and replaces calls to 
> InetSocketAddress.toString() with a wrapper that properly brackets the IPv6 
> address if there is one.
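
As an illustration of the safe pattern the description calls for (a minimal sketch using only the JDK, assuming nothing about the attached patches), the host side of a host:port pair can be bracketed whenever it is an IPv6 literal instead of having :port appended blindly:

{code}
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.InetSocketAddress;

public final class HostPortSketch {
  // Produce host:port safely, bracketing IPv6 literals as [ipv6]:port.
  static String toHostPort(InetSocketAddress addr) {
    InetAddress ip = addr.getAddress();
    String host = (ip != null) ? ip.getHostAddress() : addr.getHostString();
    if (ip instanceof Inet6Address && !host.startsWith("[")) {
      host = "[" + host + "]";
    }
    return host + ":" + addr.getPort();
  }

  public static void main(String[] args) {
    // IPv4 is unchanged; IPv6 gains brackets.
    System.out.println(toHostPort(new InetSocketAddress("127.0.0.1", 8020)));
    System.out.println(toHostPort(new InetSocketAddress("::1", 8020)));
  }
}
{code}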



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12418:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, Steve. I've committed this to trunk and branch-2.

> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + Java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Is this 
> a race in the test, or the surfacing of a bug in RPC where interrupt 
> exceptions are sometimes not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12495) Fix posix_spawn error on OS X

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12495:
--
Hadoop Flags: Incompatible change
Release Note: When Hadoop JVMs create other processes on OS X, they will 
always use posix_spawn.

> Fix posix_spawn error on OS X
> -
>
> Key: HADOOP-12495
> URL: https://issues.apache.org/jira/browse/HADOOP-12495
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0
> Environment: OS X, JDK 1.7.0_67
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12495.00.patch
>
>
> OS X JDK has issues with localization that can cause util.Shell.run to fail.  
> This is fixed in JDK9, but JDK7 and JDK8 are still broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12495) Fix posix_spawn error on OS X

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12495:
--
Environment: OS X, JDK 1.7.0_67

> Fix posix_spawn error on OS X
> -
>
> Key: HADOOP-12495
> URL: https://issues.apache.org/jira/browse/HADOOP-12495
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0
> Environment: OS X, JDK 1.7.0_67
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12495.00.patch
>
>
> OS X JDK has issues with localization that can cause util.Shell.run to fail.  
> This is fixed in JDK9, but JDK7 and JDK8 are still broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12495) Fix posix_spawn error on OS X

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12495:
--
Attachment: HADOOP-12495.00.patch

-00:
* force the launchMechanism to be POSIX_SPAWN to avoid localization issues.
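
The JDK exposes this choice through the {{jdk.lang.Process.launchMechanism}} system property on Unix-like platforms; below is a minimal sketch of the idea (whether the attached patch does exactly this is an assumption):

{code}
public class LaunchMechanismSketch {
  static {
    // Must be set before the JVM spawns its first subprocess.
    if (System.getProperty("os.name", "").startsWith("Mac")) {
      System.setProperty("jdk.lang.Process.launchMechanism", "POSIX_SPAWN");
    }
  }

  public static void main(String[] args) throws Exception {
    // Subprocess creation now uses posix_spawn rather than the broken
    // localized fork path on JDK 7/8.
    Process p = new ProcessBuilder("echo", "hello").inheritIO().start();
    System.exit(p.waitFor());
  }
}
{code}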

> Fix posix_spawn error on OS X
> -
>
> Key: HADOOP-12495
> URL: https://issues.apache.org/jira/browse/HADOOP-12495
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12495.00.patch
>
>
> OS X JDK has issues with localization that can cause util.Shell.run to fail.  
> This is fixed in JDK9, but JDK7 and JDK8 are still broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12495) Fix posix_spawn error on OS X

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12495:
--
Status: Patch Available  (was: Open)

> Fix posix_spawn error on OS X
> -
>
> Key: HADOOP-12495
> URL: https://issues.apache.org/jira/browse/HADOOP-12495
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12495.00.patch
>
>
> OS X JDK has issues with localization that can cause util.Shell.run to fail.  
> This is fixed in JDK9, but JDK7 and JDK8 are still broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12495) Fix posix_spawn error on OS X

2015-10-20 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12495:
-

 Summary: Fix posix_spawn error on OS X
 Key: HADOOP-12495
 URL: https://issues.apache.org/jira/browse/HADOOP-12495
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


OS X JDK has issues with localization that can cause util.Shell.run to fail.  
This is fixed in JDK9, but JDK7 and JDK8 are still broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2015-10-20 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated HADOOP-8884:
--
Assignee: (was: Anthony Rojas)

> DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
> native-hadoop with error: java.lang.UnsatisfiedLinkError
> -
>
> Key: HADOOP-8884
> URL: https://issues.apache.org/jira/browse/HADOOP-8884
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.0.1-alpha
>Reporter: Anthony Rojas
> Attachments: HADOOP-8884-v2.patch, HADOOP-8884.patch, 
> HADOOP-8884.patch
>
>
> Recommending to change the following debug message and promote it to a 
> warning instead:
> 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
> with error: java.lang.UnsatisfiedLinkError: 
> /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
> `GLIBC_2.6' not found (required by 
> /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2015-10-20 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated HADOOP-8884:
--
Target Version/s:   (was: 2.0.3-alpha)

> DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
> native-hadoop with error: java.lang.UnsatisfiedLinkError
> -
>
> Key: HADOOP-8884
> URL: https://issues.apache.org/jira/browse/HADOOP-8884
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.0.1-alpha
>Reporter: Anthony Rojas
> Attachments: HADOOP-8884-v2.patch, HADOOP-8884.patch, 
> HADOOP-8884.patch
>
>
> Recommending to change the following debug message and promote it to a 
> warning instead:
> 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
> with error: java.lang.UnsatisfiedLinkError: 
> /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
> `GLIBC_2.6' not found (required by 
> /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2015-10-20 Thread Anthony Rojas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965657#comment-14965657
 ] 

Anthony Rojas commented on HADOOP-8884:
---

I believe it's still a useful feature to have a warning message when native 
libraries are enabled but not available, although there's some additional work 
needed to properly implement this.
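
A minimal sketch of the log-level promotion the issue recommends, in the commons-logging {{Log}} style used in hadoop-common (an illustration, not the attached patch):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class NativeLoadSketch {
  private static final Log LOG = LogFactory.getLog(NativeLoadSketch.class);

  static boolean loadNativeHadoop() {
    try {
      System.loadLibrary("hadoop");
      return true;
    } catch (UnsatisfiedLinkError e) {
      // Promoted from LOG.debug(...): a missing or incompatible native
      // library is worth surfacing without requiring DEBUG logging.
      LOG.warn("Failed to load native-hadoop with error: " + e);
      return false;
    }
  }
}
{code}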

> DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
> native-hadoop with error: java.lang.UnsatisfiedLinkError
> -
>
> Key: HADOOP-8884
> URL: https://issues.apache.org/jira/browse/HADOOP-8884
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.0.1-alpha
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
> Attachments: HADOOP-8884-v2.patch, HADOOP-8884.patch, 
> HADOOP-8884.patch
>
>
> Recommending to change the following debug message and promote it to a 
> warning instead:
> 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
> with error: java.lang.UnsatisfiedLinkError: 
> /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
> `GLIBC_2.6' not found (required by 
> /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965638#comment-14965638
 ] 

Elliott Clark commented on HADOOP-12491:


Pushed to the branch. Thanks for the review [~ste...@apache.org]

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965622#comment-14965622
 ] 

Steve Loughran commented on HADOOP-12491:
-

+1 from me, then. I was worried about the reference to DNS in one of the 
tests, but Nemanja says that's OK.

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965609#comment-14965609
 ] 

Elliott Clark commented on HADOOP-12491:


+1 on this. Ran the tests and everything looks good.
This doesn't change the defaults on flags that Hadoop runs with.

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965577#comment-14965577
 ] 

Duo Xu commented on HADOOP-11685:
-

[~linchan]  [~cnauroth] Could you take a look at this patch?

The function {{storeEmptyFolder}} basically tries to create an empty blob with 
a folder property. So if we get the exception "there is a lease on the blob", 
which implicitly means the blob has already been created, we do not throw the 
exception and simply return.
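
A minimal sketch of that control flow (not the attached patch; the SDK call used for the create step and the lease error-code string are assumptions):

{code}
import com.microsoft.windowsazure.storage.StorageException;
import com.microsoft.windowsazure.storage.blob.CloudBlockBlob;

class StoreEmptyFolderSketch {
  // Treat a "lease on the blob" failure while creating the empty folder
  // blob as proof that the blob already exists, and return quietly.
  static void storeEmptyFolder(CloudBlockBlob folderBlob)
      throws StorageException {
    try {
      folderBlob.uploadMetadata();  // hypothetical stand-in for the create step
    } catch (StorageException e) {
      // "LeaseIdMissing" is assumed as the code behind "there is currently
      // a lease on the blob and no lease ID was specified in the request".
      if (!"LeaseIdMissing".equals(e.getErrorCode())) {
        throw e;                    // unrelated failure: propagate
      }
      // A lease is held, so a concurrent writer already created the
      // folder blob; swallow the exception instead of failing mkdirs.
    }
  }
}
{code}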

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translat

[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965573#comment-14965573
 ] 

Steve Loughran commented on HADOOP-12418:
-

LGTM

+1

> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + Java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Is this 
> a race in the test, or the surfacing of a bug in RPC where interrupt 
> exceptions are sometimes not picked up?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11880) aw jira sub-task testing, ignore

2015-10-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965535#comment-14965535
 ] 

Allen Wittenauer commented on HADOOP-11880:
---

Ping [~cnauroth], who will be most interested in this comment, although it 
should probably be copied over to HDFS-9263. :)

I'm a bit hesitant to enable Yetus for HDFS, etc., until some of this gets 
cleaned up. :(

> aw jira sub-task testing, ignore
> 
>
> Key: HADOOP-11880
> URL: https://issues.apache.org/jira/browse/HADOOP-11880
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
> Attachments: HDFS-9263-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12390) Enhance FsShell file put to support selectively preserving individual file attributes.

2015-10-20 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-12390.

Resolution: Duplicate

[~jagadesh.kiran], yes, thank you for the reminder.  I'm resolving this as 
duplicate now.

> Enhance FsShell file put to support selectively preserving individual file 
> attributes.
> --
>
> Key: HADOOP-12390
> URL: https://issues.apache.org/jira/browse/HADOOP-12390
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>Assignee: Jagadesh Kiran N
>
> {{hadoop fs -put}} currently supports the {{-p}} option.  When used, this 
> option preserves the source file's access time, modification time, ownership 
> and mode.  If the destination is HDFS, then this effectively means HDFS must 
> be configured to use modification time.  If the HDFS deployment chooses to 
> disable access time by setting {{dfs.namenode.accesstime.precision}} to 0, 
> then attempts to use the {{-p}} flag all fail with "Access time for hdfs is 
> not configured."  This issue proposes to introduce separate options for 
> preserving just ownership, just mode, or just times.  For backwards 
> compatibility, the behavior of a bare {{-p}} must continue to be preserving 
> all 3.
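
A minimal sketch of the proposed option shape (the flag letters and names here are hypothetical, not the eventual implementation):

{code}
import java.util.EnumSet;

class PreserveFlagSketch {
  enum Attr { TIMES, OWNERSHIP, MODE }

  // Bare -p keeps the backwards-compatible behavior of preserving all
  // three attributes; -pt / -po / -pm (and combinations such as -pto)
  // select individual ones.
  static EnumSet<Attr> parsePreserve(String arg) {
    if (!arg.startsWith("-p")) {
      throw new IllegalArgumentException("not a -p option: " + arg);
    }
    String suffix = arg.substring(2);
    if (suffix.isEmpty()) {
      return EnumSet.allOf(Attr.class);   // bare -p: preserve everything
    }
    EnumSet<Attr> attrs = EnumSet.noneOf(Attr.class);
    for (char c : suffix.toCharArray()) {
      switch (c) {
        case 't': attrs.add(Attr.TIMES); break;
        case 'o': attrs.add(Attr.OWNERSHIP); break;
        case 'm': attrs.add(Attr.MODE); break;
        default:
          throw new IllegalArgumentException("unknown attribute: " + c);
      }
    }
    return attrs;
  }
}
{code}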



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965518#comment-14965518
 ] 

Hadoop QA commented on HADOOP-11685:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s {color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 18s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12767637/HADOOP-11685.03.patch |
| JIRA Issue | HADOOP-11685 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  checkstyle  compile  |
| uname | Linux 3ab0dbd134da 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-ee2a191/dev-support/personality/hadoop.sh |
| git revision | trunk / 6381ddc |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java

[jira] [Created] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-20 Thread HeeSoo Kim (JIRA)
HeeSoo Kim created HADOOP-12494:
---

 Summary: fetchdt stores the token based on token kind instead of 
token service
 Key: HADOOP-12494
 URL: https://issues.apache.org/jira/browse/HADOOP-12494
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: HeeSoo Kim


The fetchdt command stores the token in a file. However, the key of the token 
is the token kind instead of the token service.
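
A sketch of the distinction (the {{Credentials}} and {{Token}} calls are the real hadoop-common APIs, but the surrounding method is illustrative only): keying by kind collapses all tokens of the same type into one entry, while keying by service keeps one entry per endpoint.

{code}
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

class FetchdtKeySketch {
  static void store(Credentials creds, Token<?> token) {
    // Behavior described in the report (buggy): key = token kind, so two
    // HDFS delegation tokens for different namenodes overwrite each other.
    //   creds.addToken(token.getKind(), token);

    // Expected: key = token service, which is unique per service address.
    creds.addToken(token.getService(), token);
  }
}
{code}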



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: HADOOP-11685.03.patch

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> com.microsoft.windowsazure.storage.blob.CloudBlockBlob.commitBlockList(C

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> com.microsoft.windowsazure.storage.blob.CloudBlockBlob.commitB

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> com.microsoft.windowsazure.storage.blob.CloudBlockBlob.commitBlockList(CloudBlockBlob.ja

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: (was: HADOOP-11685.03.patch)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> com.microsoft.windowsazure.storage.blob.CloudBlockBlob.commitBlockList(CloudBlockBlob.j

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: HADOOP-11685.03.patch

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> com.microsoft.windowsazure.storage.blob.CloudBlockBlob.commitBlockList(CloudBlockBlob.java:248)
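
For context on the fix direction: any write against a blob that holds an
active lease must present the lease ID as an AccessCondition, and the
pre-patch WASB code passed null instead. A minimal sketch against the
com.microsoft.windowsazure.storage SDK seen in the trace above (the method
name and blob key are illustrative, not taken from the patch):

{code}
import java.net.URISyntaxException;

import com.microsoft.windowsazure.storage.AccessCondition;
import com.microsoft.windowsazure.storage.StorageException;
import com.microsoft.windowsazure.storage.blob.CloudBlobContainer;
import com.microsoft.windowsazure.storage.blob.CloudBlockBlob;

public class LeasedFolderUpdate {
  // Sketch: update a folder placeholder blob under a lease instead of
  // passing a null lease ID, as the pre-patch WASB code did.
  static void touchFolderUnderLease(CloudBlobContainer container,
      String folderKey) throws StorageException, URISyntaxException {
    CloudBlockBlob folderBlob = container.getBlockBlobReference(folderKey);
    // Acquire a 60-second lease; the service hands back a lease ID that
    // must accompany every write while the lease is held.
    String leaseId = folderBlob.acquireLease(60, null);
    try {
      AccessCondition lease = AccessCondition.generateLeaseCondition(leaseId);
      // Without this AccessCondition the request carries no lease ID and
      // the service rejects it with the StorageException quoted above.
      folderBlob.uploadMetadata(lease, null, null);
    } finally {
      folderBlob.releaseLease(AccessCondition.generateLeaseCondition(leaseId));
    }
  }
}
{code}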

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965427#comment-14965427
 ] 

Hadoop QA commented on HADOOP-11685:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 8s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-tools/hadoop-azure (total was 33, now 34). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767625/HADOOP-11685.02.patch 
|
| JIRA Issue | HADOOP-11685 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 48743b4cc40c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-ee2a191/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 71e53

[jira] [Updated] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12418:

Target Version/s: 2.8.0

> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch
>
>
> Jenkins trunk + Java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. A race 
> in the test, or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?





[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: HADOOP-11685.02.patch


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: HADOOP-11685.02.patch


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-20 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: (was: HADOOP-11685.02.patch)


[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965388#comment-14965388
 ] 

Hadoop QA commented on HADOOP-12418:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 18s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 22s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767611/HADOOP-12418.v2.patch 
|
| JIRA Issue | HADOOP-12418 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 0583cc7a58e2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-ee2a191/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 9cb5d35 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64

[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-10-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965370#comment-14965370
 ] 

Daryn Sharp commented on HADOOP-11628:
--

bq.  +1, ideally with a backport to 2.7.x branch

Thanks Steve!  The patch seems to apply cleanly to branch-2.7 (but I'm 
infamous for screwing up git).  Did you encounter a problem?

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
>  Labels: jdk8
> Fix For: 2.8.0
>
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this, which breaks the use of 
> user-friendly CNAMEs for services.
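
One direction such a fix can take, shown as a hedged sketch (the actual
HADOOP-11628.patch may differ), is to canonicalize the hostname explicitly
before building the service principal instead of relying on GSSName:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SpnegoPrincipal {
  // Sketch: resolve a possibly-CNAME'd service hostname to its canonical
  // name before building the HTTP/_HOST SPNEGO principal, since JDK8's
  // GSSName no longer canonicalizes it for us.
  static String servicePrincipal(String host) throws UnknownHostException {
    String canonical = InetAddress.getByName(host).getCanonicalHostName();
    return "HTTP/" + canonical.toLowerCase();
  }
}
{code}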





[jira] [Commented] (HADOOP-12390) Enhance FsShell file put to support selectively preserving individual file attributes.

2015-10-20 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965356#comment-14965356
 ] 

Jagadesh Kiran N commented on HADOOP-12390:
---

Hi [~cnauroth], as HDFS-9208 is resolved, we can close this, right?

> Enhance FsShell file put to support selectively preserving individual file 
> attributes.
> --
>
> Key: HADOOP-12390
> URL: https://issues.apache.org/jira/browse/HADOOP-12390
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>Assignee: Jagadesh Kiran N
>
> {{hadoop fs -put}} currently supports the {{-p}} option.  When used, this 
> option preserves the source file's access time, modification time, ownership 
> and mode.  If the destination is HDFS, then this effectively means HDFS must 
> be configured to support access times.  If the HDFS deployment chooses to 
> disable access time by setting {{dfs.namenode.accesstime.precision}} to 0, 
> then attempts to use the {{-p}} flag all fail with "Access time for hdfs is 
> not configured."  This issue proposes to introduce separate options for 
> preserving just ownership, just mode, or just times.  For backwards 
> compatibility, the behavior of a bare {{-p}} must continue to be preserving 
> all 3.
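
For concreteness, the behavior described above can be reproduced by driving
FsShell programmatically; in this sketch the paths are illustrative, and the
run fails as described when the target NameNode sets
{{dfs.namenode.accesstime.precision}} to 0:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;

public class PutPreserveDemo {
  public static void main(String[] args) throws Exception {
    FsShell shell = new FsShell(new Configuration());
    // -p preserves access time, modification time, ownership and mode; it
    // fails with "Access time for hdfs is not configured." when the
    // NameNode runs with dfs.namenode.accesstime.precision = 0.
    int rc = shell.run(new String[] {"-put", "-p", "local.txt", "/user/demo/"});
    System.exit(rc);
  }
}
{code}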





[jira] [Updated] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12418:

Attachment: HADOOP-12418.v2.patch

> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch, HADOOP-12418.v2.patch





[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Nemanja Matkovic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965165#comment-14965165
 ] 

Nemanja Matkovic commented on HADOOP-12491:
---

Thanks for the questions, here are the answers:
  2. Yup, my change only alters the comment that says "we don't support IPv6", 
but keeps IPv4 as the default. [~eclark] will tackle how to keep running an 
IPv6 configuration in HADOOP-11630.
  3. The new DNS tests will work in offline mode, as I'm testing only the 
parsing of an IPv6 address into nibbles, not actually querying a DNS server 
for it (a possible test gap, but it seems to me that querying DNS would 
introduce more flakiness than benefit?).
  4. I did a test run on a dual-stack machine where all tests have 
"preferIPv4Stack=true" (AFAIK that is the same as an IPv4-only machine, since 
it goes only through the IPv4 code paths) - nothing happens, tests run and 
pass.
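
For readers following along: the "unsafe split" of the title is the common
pattern of splitting "host:port" on ':', which breaks for IPv6 literals. A
minimal sketch of the safe direction (illustrative only, not the patch
itself):

{code}
public class HostPortParser {
  // Sketch: split "host:port" without mis-splitting bracketed IPv6
  // literals such as "[2001:db8::1]:8020". Bracketed literals keep their
  // colons; otherwise only the last ':' separates the port.
  static String[] splitHostPort(String addr) {
    if (addr.startsWith("[")) {
      int close = addr.indexOf(']');
      if (close < 0) {
        throw new IllegalArgumentException("Unclosed IPv6 literal: " + addr);
      }
      String host = addr.substring(1, close);
      String port = addr.length() > close + 2 ? addr.substring(close + 2) : "";
      return new String[] { host, port };
    }
    int idx = addr.lastIndexOf(':');
    if (idx < 0) {
      return new String[] { addr, "" };
    }
    return new String[] { addr.substring(0, idx), addr.substring(idx + 1) };
  }
}
{code}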

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122





[jira] [Commented] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965114#comment-14965114
 ] 

Steve Loughran commented on HADOOP-12418:
-

bq. I could leave it as is and add an explicit check for 
InterruptedIOException, if you think that's cleaner.

yes please

> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch





[jira] [Updated] (HADOOP-12418) TestRPC.testRPCInterruptedSimple fails intermittently

2015-10-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12418:

Summary: TestRPC.testRPCInterruptedSimple fails intermittently  (was: 
failure of TestRPC.testRPCInterruptedSimple)

> TestRPC.testRPCInterruptedSimple fails intermittently
> -
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch





[jira] [Updated] (HADOOP-12418) failure of TestRPC.testRPCInterruptedSimple

2015-10-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12418:

Summary: failure of TestRPC.testRPCInterruptedSimple  (was: failure of 
TestRPC.testRPCInterruptedSimple on java 8 jenkins)

> failure of TestRPC.testRPCInterruptedSimple
> ---
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch





[jira] [Commented] (HADOOP-12418) failure of TestRPC.testRPCInterruptedSimple on java 8 jenkins

2015-10-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965100#comment-14965100
 ] 

Kihwal Lee commented on HADOOP-12418:
--------------------------------------

bq. Would it be possible just to intercept InterruptedIOException rather than 
scan the string?
{{call()}} can throw either an {{InterruptedIOException}} or an 
{{InterruptedException}} wrapped in an {{IOException}}. The string scan was 
originally there for the latter. I could leave it as is and add an explicit 
check for {{InterruptedIOException}}, if you think that's cleaner.
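
A minimal sketch of the combined check (hypothetical helper, not the actual 
patch):
{code}
import java.io.IOException;
import java.io.InterruptedIOException;

// Accept the interrupt whether it surfaces as an InterruptedIOException
// or as an InterruptedException wrapped in a plain IOException.
class InterruptCheck {
  static boolean isInterruption(IOException ioe) {
    return ioe instanceof InterruptedIOException
        || ioe.getCause() instanceof InterruptedException
        // keep the original string scan for wrapped exceptions whose
        // cause chain is not preserved
        || ioe.toString().contains("InterruptedException");
  }
}
{code}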

> failure of TestRPC.testRPCInterruptedSimple on java 8 jenkins
> -------------------------------------------------------------
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch
>
>
> Jenkins trunk + java 8 saw a failure of  
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Race in 
> test -or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?





[jira] [Commented] (HADOOP-11624) Prevent fail-over during client shutdown

2015-10-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965074#comment-14965074
 ] 

Kihwal Lee commented on HADOOP-11624:
--------------------------------------

Sorry, I apparently filed it twice. :)

> Prevent fail-over during client shutdown
> ----------------------------------------
>
> Key: HADOOP-11624
> URL: https://issues.apache.org/jira/browse/HADOOP-11624
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Priority: Critical
>
> We've seen an HBase RS hanging during shutdown. It turns out the ipc client 
> was throwing {{java.nio.channels.ClosedByInterruptException}} during the 
> shutdown. The HA failover retry logic then determined to failover and 
> retry. If an interrupt was received as part of the user shutting down the 
> client, failover should not happen.
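
A rough illustration of the desired guard (names are hypothetical, not the 
actual RetryPolicy API):
{code}
import java.io.IOException;
import java.nio.channels.ClosedByInterruptException;

class ShutdownAwareFailover {
  // An interrupt during client shutdown surfaces as a
  // ClosedByInterruptException; treat it as fatal to this client
  // rather than as a reason to fail over and retry.
  static boolean shouldFailover(IOException e) {
    if (e instanceof ClosedByInterruptException
        || Thread.currentThread().isInterrupted()) {
      return false;
    }
    return true; // otherwise defer to the normal failover policy
  }
}
{code}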





[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964980#comment-14964980
 ] 

Hadoop QA commented on HADOOP-12053:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 23s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 47s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.ipc.TestDecayRpcScheduler |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.ipc.TestIPC |
|   | hadoop.metrics2.impl.TestMetricsSystemImpl |
|   | hadoop.fs.TestSymlinkLocalFSFileContext |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12742021/HADOOP-12053.003.patch
 |
| JIRA Issue | HADOOP-12053 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  uni

[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-10-20 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964937#comment-14964937
 ] 

Brahma Reddy Battula commented on HADOOP-12053:
------------------------------------------------

[~jira.shegalov], thanks for working on this issue. The patch LGTM; I verified 
the fix and it's working fine. Since HADOOP-12304 went to branch-2.7, this 
should be merged to branch-2.7 as well. [~cnauroth], can you please take a look 
at this issue?

> Harfs defaulturiport should be Zero ( should not -1)
> ----------------------------------------------------
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Gera Shegalov
>Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, 
> HADOOP-12053.003.patch
>
>
> The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
> and returns -1. But -1 can't pass the {{checkPath}} check when 
> {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
>  *Test Code:* 
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0 && 
> applicationId.toString().compareTo(edges[1]) <= 0) {
> Path harPath = new Path("har://" + 
> file.getPath().toUri().getPath());
> harPath = harPath.getFileSystem(conf).makeQualified(harPath);
> remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
> harPath, applicationId, appOwner,
> LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
> if 
> (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir))
>  {
> remoteDirSet.add(remoteAppDir);
> }
> {code}
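
For context, the shape of the proposed fix (a simplified sketch, not the 
committed patch):
{code}
// In the har AbstractFileSystem: report 0 instead of -1 as the default
// port. checkPath() rejects a port-less authority such as
// hdfs://hacluster when the default port is -1, so returning 0 lets
// such URIs pass the check.
@Override
public int getUriDefaultPort() {
  return 0;  // was: return -1;
}
{code}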





[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-10-20 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-12053:
-------------------------------------------
Target Version/s: 2.8.0, 2.7.2  (was: 2.8.0)

> Harfs defaulturiport should be Zero ( should not -1)
> ----------------------------------------------------
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Gera Shegalov
>Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, 
> HADOOP-12053.003.patch
>
>
> The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
> and returns -1. But -1 can't pass the {{checkPath}} check when 
> {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
>  *Test Code:* 
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0 && 
> applicationId.toString().compareTo(edges[1]) <= 0) {
> Path harPath = new Path("har://" + 
> file.getPath().toUri().getPath());
> harPath = harPath.getFileSystem(conf).makeQualified(harPath);
> remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
> harPath, applicationId, appOwner,
> LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
> if 
> (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir))
>  {
> remoteDirSet.add(remoteAppDir);
> }
> {code}





[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-10-20 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-12053:
-------------------------------------------
Priority: Critical  (was: Major)

> Harfs defaulturiport should be Zero ( should not -1)
> ----------------------------------------------------
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Gera Shegalov
>Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, 
> HADOOP-12053.003.patch
>
>
> The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
> and returns -1. But -1 can't pass the {{checkPath}} check when 
> {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
>  *Test Code:* 
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0 && 
> applicationId.toString().compareTo(edges[1]) <= 0) {
> Path harPath = new Path("har://" + 
> file.getPath().toUri().getPath());
> harPath = harPath.getFileSystem(conf).makeQualified(harPath);
> remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
> harPath, applicationId, appOwner,
> LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
> if 
> (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir))
>  {
> remoteDirSet.add(remoteAppDir);
> }
> {code}





[jira] [Commented] (HADOOP-12487) DomainSocket.close() assumes incorrect Linux behaviour

2015-10-20 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964912#comment-14964912
 ] 

Alan Burlison commented on HADOOP-12487:


External discussion on the issue: 
http://www.spinics.net/lists/netdev/msg348757.html

> DomainSocket.close() assumes incorrect Linux behaviour
> ------------------------------------------------------
>
> Key: HADOOP-12487
> URL: https://issues.apache.org/jira/browse/HADOOP-12487
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Linux Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: shutdown.c
>
>
> I'm getting a test failure in TestDomainSocket.java, in the 
> testSocketAcceptAndClose test. That test creates a socket which one thread 
> waits on in DomainSocket.accept() whilst a second thread sleeps for a short 
> time before closing the same socket with DomainSocket.close().
> DomainSocket.close() first calls shutdown0() on the socket before calling 
> close0() - both of those are thin wrappers around the corresponding libc 
> socket calls. DomainSocket.close() contains the following comment, explaining 
> the logic involved:
> {code}
>   // Calling shutdown on the socket will interrupt blocking system
>   // calls like accept, write, and read that are going on in a
>   // different thread.
> {code}
> Unfortunately that relies on non-standards-compliant Linux behaviour. I've 
> written a simple C test case that replicates the scenario above:
> # ThreadA opens, binds, listens and accepts on a socket, waiting for 
> connections.
> # Some time later ThreadB calls shutdown on the socket ThreadA is waiting in 
> accept on.
> Here is what happens:
> On Linux, the shutdown call in ThreadB succeeds and the accept call in 
> ThreadA returns with EINVAL.
> On Solaris, the shutdown call in ThreadB fails and returns ENOTCONN. ThreadA 
> continues to wait in accept.
> Relevant POSIX manpages:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/accept.html
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/shutdown.html
> The POSIX shutdown manpage says:
> "The shutdown() function shall cause all or part of a full-duplex connection 
> on the socket associated with the file descriptor socket to be shut down."
> ...
> "\[ENOTCONN] The socket is not connected."
> Pages 229 & 303 of "UNIX System V Network Programming" say:
> "shutdown can only be called on sockets that have been previously connected"
> "The socket \[passed to accept that] fd refers to does not participate in the 
> connection. It remains available to receive further connect indications"
> That is pretty clear: sockets being waited on with accept are not connected 
> by definition. Nor is the accept socket connected when a client connects 
> to it; it is the socket returned by accept that is connected to the client. 
> Therefore the Solaris behaviour of failing the shutdown call is correct.
> In order to get the required behaviour of ThreadB causing ThreadA to exit the 
> accept call with an error, the correct way is for ThreadB to call close on 
> the socket that ThreadA is waiting on in accept.
> On Solaris, calling close in ThreadB succeeds, and the accept call in ThreadA 
> fails and returns EBADF.
> On Linux, calling close in ThreadB succeeds but ThreadA continues to wait in 
> accept until there is an incoming connection. That accept returns 
> successfully. However subsequent accept calls on the same socket return EBADF.
> The Linux behaviour is fundamentally broken in three places:
> # Allowing shutdown to succeed on an unconnected socket is incorrect.  
> # Returning a successful accept on a closed file descriptor is incorrect, 
> especially as future accept calls on the same socket fail.
> Once shutdown has been called on the socket, calling close on the socket 
> fails with EBADF. That is incorrect: shutdown should just prevent further IO 
> on the socket, not close it.
> The real issue, though, is that there's no single way of doing this that 
> works on both Solaris and Linux; there will need to be platform-specific code 
> in Hadoop to cater for the Linux brokenness. 
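
For reference, a minimal Java analogue of the two-thread race, using the 
ordinary JDK {{ServerSocket}} rather than the JNI {{DomainSocket}}, so it 
exercises the JDK's own wakeup handling rather than the raw libc semantics 
described above:
{code}
import java.io.IOException;
import java.net.ServerSocket;

public class AcceptCloseRace {
  public static void main(String[] args) throws Exception {
    final ServerSocket server = new ServerSocket(0);
    Thread acceptor = new Thread(() -> {
      try {
        server.accept();                // ThreadA blocks here
        System.out.println("accept returned a connection");
      } catch (IOException e) {
        System.out.println("accept failed: " + e);
      }
    });
    acceptor.start();
    Thread.sleep(500);                  // let ThreadA reach accept()
    server.close();                     // ThreadB closes the listener
    acceptor.join();
  }
}
{code}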





[jira] [Commented] (HADOOP-11880) aw jira sub-task testing, ignore

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964845#comment-14964845
 ] 

Steve Loughran commented on HADOOP-11880:
------------------------------------------

Looking at the tests, it turns out some tests do expect hard-coded paths for 
the minidfs cluster, so that the cluster comes back up in the same run:
{code}
2015-10-20 03:54:11,141 [main] WARN  namenode.FSNamesystem 
(FSNamesystem.java:loadFromDisk(682)) - Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/TzpG5hegiz/name-0-1
 is in an inconsistent state: storage directory does not exist or is not 
accessible.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:323)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:211)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:973)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:680)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:571)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:628)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:833)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:812)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:479)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
at org.apache.hadoop.hdfs.TestSetTimes.testTimes(TestSetTimes.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

This is going to be fun. Either revert to a hard-coded path and expect 
parallel tests to fail, or extend the dfs builder to add an operation to set 
the subdir, which would be retained over a test case/test suite. The revert is 
the short-term option, but mini dfs will need to be fixed for reliable hdfs 
test runs; a sketch of the builder option is below.
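
For what it's worth, a sketch of that second option, assuming the existing 
{{hdfs.minidfs.basedir}} knob is the hook (not a committed change):
{code}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Pin the cluster's storage under a per-suite directory so a cluster
// restarted within the same run finds its name/data dirs again, while
// parallel suites still get distinct paths.
Configuration conf = new HdfsConfiguration();
conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
    new File("target/test/data", "TestSetTimes").getAbsolutePath());
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1)
    .build();
{code}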

> aw jira sub-task testing, ignore
> --------------------------------
>
> Key: HADOOP-11880
> URL: https://issues.apache.org/jira/browse/HADOOP-11880
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
> Attachments: HDFS-9263-001.patch
>
>






[jira] [Commented] (HADOOP-12418) failure of TestRPC.testRPCInterruptedSimple on java 8 jenkins

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964805#comment-14964805
 ] 

Steve Loughran commented on HADOOP-12418:
------------------------------------------

Bit of an ugly test. Would it be possible just to intercept 
InterruptedIOException rather than scan the string?

> failure of TestRPC.testRPCInterruptedSimple on java 8 jenkins
> -------------------------------------------------------------
>
> Key: HADOOP-12418
> URL: https://issues.apache.org/jira/browse/HADOOP-12418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins, Java 8
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-12418.patch
>
>
> Jenkins trunk + java 8 saw a failure of 
> {{TestRPC.testRPCInterruptedSimple}}; the interrupt wasn't picked up. Race in 
> the test, or a surfacing of a bug in RPC where at some points interrupt 
> exceptions are not picked up?





[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964810#comment-14964810
 ] 

Steve Loughran commented on HADOOP-12491:
------------------------------------------

# Ignore the bats failures; they are related to the new patch runner.
# As I said before, we still need to allow IPv4 to be mandated, and keeping 
that as the default ensures no surprises. For IPv6 we need to add a way to turn 
this off in Maven and the scripts. [~aw] and [~cnauroth] are probably the best 
placed to review the scripts.
# Do a test run on a machine that's offline. Do the new rDNS tests work?
# Then do a test run on a machine/VM without IPv6. What happens then?
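
For context on the "unsafe split" in the title, a minimal illustration (not 
code from the patch): a bracket-aware parse instead of a bare split on ':'.
{code}
import java.net.URI;
import java.net.URISyntaxException;

class HostPort {
  // "host:port".split(":") cuts inside an IPv6 literal such as
  // 2001:db8::1; letting java.net.URI parse the bracketed form
  // "[2001:db8::1]:8020" keeps host and port intact.
  static String hostOf(String hostPort) throws URISyntaxException {
    return new URI("scheme://" + hostPort).getHost();
  }
}
{code}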

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---------------------------------------------------------------------------
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122





[jira] [Commented] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964802#comment-14964802
 ] 

Steve Loughran commented on HADOOP-12415:
------------------------------------------

+1 from me. Kos, do you want to do the commit?

> hdfs and nfs builds broken on -missing compile-time dependency on netty
> ------------------------------------------------------------------------
>
> Key: HADOOP-12415
> URL: https://issues.apache.org/jira/browse/HADOOP-12415
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
> Environment: Bigtop, plain Linux distro of any kind
>Reporter: Konstantin Boudnik
>Assignee: Tom Zeng
> Attachments: HADOOP-12415.patch
>
>
> As discovered in BIGTOP-2049, the {{hadoop-nfs}} module compilation is 
> broken. It looks like HADOOP-11489 is the root cause of it.





[jira] [Resolved] (HADOOP-11624) Prevent fail-over during client shutdown

2015-10-20 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su resolved HADOOP-11624.

Resolution: Duplicate

Looks like this was fixed by HADOOP-12464.

> Prevent fail-over during client shutdown
> ----------------------------------------
>
> Key: HADOOP-11624
> URL: https://issues.apache.org/jira/browse/HADOOP-11624
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Priority: Critical
>
> We've seen an HBase RS hanging during shutdown. It turns out the ipc client 
> was throwing {{java.nio.channels.ClosedByInterruptException}} during the 
> shutdown. The HA failover retry logic then determined to failover and 
> retry. If an interrupt was received as part of the user shutting down the 
> client, failover should not happen.


