[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-07-27 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
Attachment: HADOOP-14626-testing.02.patch

testing.02: Removed the hard-coded version in the hadoop-client-check-invariants module.
(This patch is not ready for commit.)
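
For illustration, a hedged sketch of the kind of change described, assuming the
module previously pinned the plugin version and now inherits it from the parent
pom's pluginManagement (the exact before/after is defined by the attached
patch, not reproduced here):

{code}
<!-- hadoop-client-check-invariants/pom.xml (illustrative sketch only) -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <!-- before: a hard-coded <version>...</version> element appeared here;
       after: the element is removed, so the version managed in the parent
       pom applies -->
</plugin>
{code}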

> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, HADOOP-14626.testing.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Assigned] (HADOOP-14676) Wrong default value for "fs.df.interval"

2017-07-27 Thread xiangguang zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangguang zheng reassigned HADOOP-14676:
-

Assignee: xiangguang zheng  (was: Erik Krogen)

> Wrong default value for "fs.df.interval"
> 
>
> Key: HADOOP-14676
> URL: https://issues.apache.org/jira/browse/HADOOP-14676
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, conf, fs
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: xiangguang zheng
>
> According to {{core-default.xml}}, the default value of {{fs.df.interval}} is 
> 60 seconds. But the implementation of {{DF}} substitutes 3 seconds instead. 
> The problem is that {{DF}} uses the outdated constant 
> {{DF.DF_INTERVAL_DEFAULT}} instead of the correct one, 
> {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}.
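
For illustration, a minimal sketch of the mismatch (the constant names are the
ones cited above; the 3-second and 60-second values are as stated in the
description):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

Configuration conf = new Configuration();

// What DF effectively does today -- falls back to its own outdated constant:
long buggy = conf.getLong(
    CommonConfigurationKeysPublic.FS_DF_INTERVAL_KEY,  // "fs.df.interval"
    3 * 1000L);                                        // DF.DF_INTERVAL_DEFAULT: 3 sec

// The fix: use the matching default from the public configuration keys:
long fixed = conf.getLong(
    CommonConfigurationKeysPublic.FS_DF_INTERVAL_KEY,
    CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT);  // 60 sec
{code}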






[jira] [Comment Edited] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104365#comment-16104365
 ] 

Bharat Viswanadham edited comment on HADOOP-14672 at 7/28/17 2:51 AM:
--

[~busbey]
[INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in the
shaded jar. This includes the fsimage XML tool, which is part of the
hadoop-client-minicluster jar.


xerces:xercesImpl was added for the fsimage XML tool by HDFS-4629 ("Using
com.sun.org.apache.xml.internal.serialize.* in XmlEditsVisitor.java is JVM
vendor specific. Breaks IBM JAVA.").

From my understanding, it is only used by the offline fsimage XML tool.



was (Author: bharatviswa):
[~busbey] 
[INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in the 
shaded jar.
Which includes fsimage xml tool. 


The xerces:xercesImpl is added for fsimage xml tool.
Jira which added: HDFS-4629. Using com.sun.org.apache.xml.internal.serialize.* 
in XmlEditsVisitor.java is JVM vendor specific. Breaks IBM JAVA.


> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]






[jira] [Comment Edited] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-07-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104367#comment-16104367
 ] 

Bharat Viswanadham edited comment on HADOOP-14685 at 7/28/17 2:56 AM:
--

[~busbey]

The testjar and testshell package classes come from
hadoop-mapreduce-client-jobclient:test-jar.
So, if you are saying this artifact is meant to include test jars, then the
classes above should be included in this artifact, right?

Let me know if I am missing something here.


was (Author: bharatviswa):
[~busbey]

the testjar package classes are from hadoop-mapreduce-client-jobclient:test-jar.
So, if you are saying this artifact is to include test jars, then above jars 
should be included in this artifact right?

Let me know if i am missing something here.

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This jira is to discuss which test jars should be included in or excluded 
> from hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster:
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including 

[jira] [Commented] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-07-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104367#comment-16104367
 ] 

Bharat Viswanadham commented on HADOOP-14685:
-

[~busbey]

The testjar package classes come from hadoop-mapreduce-client-jobclient:test-jar.
So, if you are saying this artifact is meant to include test jars, then the
classes above should be included in this artifact, right?

Let me know if I am missing something here.

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This jira is to discuss which test jars should be included in or excluded 
> from hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster:
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including com.jcraft:jsch:jar:0.1.54 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including com.codahale.metrics:metrics-core:jar:3.0.1 in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> 

[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-27 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104365#comment-16104365
 ] 

Bharat Viswanadham commented on HADOOP-14672:
-

[~busbey]
[INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in the
shaded jar.
This includes the fsimage XML tool.


xerces:xercesImpl was added for the fsimage XML tool by HDFS-4629 ("Using
com.sun.org.apache.xml.internal.serialize.* in XmlEditsVisitor.java is JVM
vendor specific. Breaks IBM JAVA.").


> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]






[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104363#comment-16104363
 ] 

Hadoop QA commented on HADOOP-14313:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 54s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.security.TestGroupsCaching |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14313 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867062/HADOOP-14313.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 97a6f86ef168 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6330f2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12880/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12880/testReport/ |
| modules | 

[jira] [Comment Edited] (HADOOP-14686) Branch-2.7 .gitignore is out of date

2017-07-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104352#comment-16104352
 ] 

Sean Busbey edited comment on HADOOP-14686 at 7/28/17 2:33 AM:
---

{quote}
Hope you are not saying you revoked Yetus support for the Hadoop stable 
release. By not integrating HADOOP-11746 into branch-2?
So what is your recommendations for restoring branch-2 build, as it worked 2 
weeks ago?
{quote}

I don't know how Allen could have possibly done this. He's just saying he's not 
willing to spend his time as a volunteer working on a particular branch, a 
position he's held consistently about branch-2 derivatives for a long time. 
Frankly, I was pretty surprised when he was willing to spend time on this JIRA 
at all. HADOOP-11746 is entirely irrelevant to our current issue. The result of 
it was a version of the precommit testing that we've since replaced in Hadoop 
like 2 or 3 times over.

I _am_ willing to spend my time as a volunteer chasing this down, because I 
personally try to push folks towards the 2.7.z releases. I only ask that you 
try to be a little more patient in your phrasing. I tend to be bursty in Hadoop 
because my attention is often elsewhere, so I have no way to know what the 
condition of the ASF build server was when you observed things working on 
branch-2.7. I can help chase down specific failures, but a different JIRA that 
tracks examples will be more helpful than an extended conversation on an issue 
we've already solved.


was (Author: busbey):
{quote}
Hope you are not saying you revoked Yetus support for the Hadoop stable 
release. By not integrating HADOOP-11746 into branch-2?
So what is your recommendations for restoring branch-2 build, as it worked 2 
weeks ago?
{quote}

I don't know how Allen could have possibly done this. He's just saying he's not 
willing to spend his time as a volunteer working on a particular branch, a 
position he's held consistently about branch-2 derivatives for a long time. 
Frankly, I was pretty surprised when he was willing to spend time on this JIRA 
at all. HADOOP-11746 is entirely irrelevant to our current issue. The result of 
it was a version of the precommit testing that we've sense replaced in Hadoop 
like 2 or 3 time over.

I _am_ willing to spend my time as a volunteer chasing this down, because I 
personally try to push folks towards the 2.7.z releases. I only ask that you 
try to be a little more patient in your phrasing. I tend to be bursty in Hadoop 
because my attention is often elsewhere, so I have no way to know what the 
condition of the ASF build server was when you observed things working on 
branch-2.7. I can help chase down specific failures, but a different JIRA that 
tracks examples will be more helpful than an extended conversation on an issue 
we've already solved.

> Branch-2.7 .gitignore is out of date
> 
>
> Key: HADOOP-14686
> URL: https://issues.apache.org/jira/browse/HADOOP-14686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit
>Affects Versions: 2.7.4
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.7.4
>
> Attachments: HADOOP-14686-branch-2.7.v0.patch, 
> HADOOP-14686-branch-2.7.v1.patch
>
>
> .gitignore is out of date on branch-2.7, which is causing issues in precommit 
> checks for that branch.






[jira] [Commented] (HADOOP-14686) Branch-2.7 .gitignore is out of date

2017-07-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104352#comment-16104352
 ] 

Sean Busbey commented on HADOOP-14686:
--

{quote}
Hope you are not saying you revoked Yetus support for the Hadoop stable 
release. By not integrating HADOOP-11746 into branch-2?
So what is your recommendations for restoring branch-2 build, as it worked 2 
weeks ago?
{quote}

I don't know how Allen could have possibly done this. He's just saying he's not 
willing to spend his time as a volunteer working on a particular branch, a 
position he's held consistently about branch-2 derivatives for a long time. 
Frankly, I was pretty surprised when he was willing to spend time on this JIRA 
at all. HADOOP-11746 is entirely irrelevant to our current issue. The result of 
it was a version of the precommit testing that we've since replaced in Hadoop 
like 2 or 3 times over.

I _am_ willing to spend my time as a volunteer chasing this down, because I 
personally try to push folks towards the 2.7.z releases. I only ask that you 
try to be a little more patient in your phrasing. I tend to be bursty in Hadoop 
because my attention is often elsewhere, so I have no way to know what the 
condition of the ASF build server was when you observed things working on 
branch-2.7. I can help chase down specific failures, but a different JIRA that 
tracks examples will be more helpful than an extended conversation on an issue 
we've already solved.

> Branch-2.7 .gitignore is out of date
> 
>
> Key: HADOOP-14686
> URL: https://issues.apache.org/jira/browse/HADOOP-14686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit
>Affects Versions: 2.7.4
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.7.4
>
> Attachments: HADOOP-14686-branch-2.7.v0.patch, 
> HADOOP-14686-branch-2.7.v1.patch
>
>
> .gitignore is out of date on branch-2.7, which is causing issues in precommit 
> checks for that branch.






[jira] [Commented] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-07-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104329#comment-16104329
 ] 

Sean Busbey commented on HADOOP-14685:
--

The test-jar artifacts aren't the problem. We probably need to include them, as 
that's where the minicluster classes are defined (and those are why we create 
this artifact in the first place).

What we don't need are the things that got flagged on HADOOP-14089:

{code}

testjar/
testjar/ClassWordCount$MapClass.class
testjar/ClassWordCount$Reduce.class
testjar/ClassWordCount.class
testjar/CustomOutputCommitter.class
testjar/ExternalIdentityReducer.class
testjar/ExternalMapperReducer.class
testjar/ExternalWritable.class
testjar/JobKillCommitter$CommitterWithFailCleanup.class
testjar/JobKillCommitter$CommitterWithFailSetup.class
testjar/JobKillCommitter$CommitterWithNoError.class
testjar/JobKillCommitter$MapperFail.class
testjar/JobKillCommitter$MapperPass.class
testjar/JobKillCommitter$MapperPassSleep.class
testjar/JobKillCommitter$ReducerFail.class
testjar/JobKillCommitter$ReducerPass.class
testjar/JobKillCommitter.class
testjar/UserNamePermission$UserNameMapper.class
testjar/UserNamePermission$UserNameReducer.class
testjar/UserNamePermission.class
testshell/
testshell/ExternalMapReduce$MapClass.class
testshell/ExternalMapReduce$Reduce.class
testshell/ExternalMapReduce.class
.options
jdtCompilerAdapter.jar
{code}

I haven't chased down where these come from, but we should filter out just 
those entries.

We probably also ought to exclude unneeded properties files that come out of 
the test jars, most importantly log4j.properties, since logging configuration 
should be left to downstream.
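
For illustration, a hedged sketch of how those entries could be filtered with
maven-shade-plugin (the artifact coordinate follows the guess elsewhere in
this thread that these classes come from the jobclient test jar; this is
illustrative, not the module's actual pom configuration):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <filters>
      <!-- drop the sample MR job classes flagged on HADOOP-14089 -->
      <filter>
        <artifact>org.apache.hadoop:hadoop-mapreduce-client-jobclient</artifact>
        <excludes>
          <exclude>testjar/**</exclude>
          <exclude>testshell/**</exclude>
          <exclude>.options</exclude>
          <exclude>jdtCompilerAdapter.jar</exclude>
        </excludes>
      </filter>
      <!-- leave logging configuration to downstream -->
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>log4j.properties</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
</plugin>
{code}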

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This jira is to discuss which test jars should be included in or excluded 
> from hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster:
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in 

[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104325#comment-16104325
 ] 

Sean Busbey commented on HADOOP-14672:
--

{quote}
Ideally, it is true that these test jars are not marked as public or 
LimitedPrivate so shouldn't be used by downstream projects. However, it would 
be great if we can check those main stream projects, such as HBase, Hive, etc. 
won't use them at all. Otherwise, some related tests for downstream projects 
could get break. I like the idea to separate it into a dedicated JIRA for 
additional discussion and verification. Sean Busbey, what do you think?
{quote}

I think it's broken that we include them; in our new major version we should be 
using the new client-facing jars to push back against prior broken behavior 
whenever we can. I'm fine with doing this in a follow-on, and will take my 
concerns over to HADOOP-14685.

{quote}
Xerces sounds like a complicated issue across different JVMs. Can we just leave 
it there or treat it as normal third party classes? Any side-effect if we shade 
Xerces classes as third party classes?
{quote}

{quote}
According to my understanding shading Xerces classes, will not cause issue. It 
will work across different JVM's with out any issue.
{quote}

Xerces can be difficult, because it provides an alternate implementation of 
the basic XML building blocks that ship with the JVM. If we include it 
relocated, we'll either a) do it wrong and break the JVM when folks try to 
use the built-in XML classes, or b) make diagnosing a problem super hard when 
we correctly load an alternate XML parsing implementation.

If we're actually using it, that's fine; let's relocate and bundle it. But 
let's be sure we're actually using it. I didn't see anyone chase down whether 
we need it beyond the fsimage XML tool, which isn't even part of what we're 
trying to provide with this artifact.
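
To make the failure mode concrete, a small hedged illustration: JAXP resolves
its factories via service lookup, so whichever implementation wins that lookup
is handed to every caller in the JVM (the relocated package name in the
comment below is hypothetical):

{code}
import javax.xml.parsers.DocumentBuilderFactory;

public class WhichParser {
  public static void main(String[] args) {
    // JAXP consults system properties and META-INF/services entries to pick
    // an implementation. If a shaded jar ships a rewritten service entry,
    // this can return e.g.
    // shaded.org.apache.xerces.jaxp.DocumentBuilderFactoryImpl instead of
    // the JDK's built-in parser -- for every caller on the classpath.
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    System.out.println(factory.getClass().getName());
  }
}
{code}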

> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]






[jira] [Commented] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104282#comment-16104282
 ] 

Hadoop QA commented on HADOOP-14397:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  9s{color} | {color:orange} root: The patch generated 7 new + 116 unchanged 
- 2 fixed = 123 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.fs.TestLocalFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14397 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879265/HADOOP-14397.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c5391dacc474 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e3c7300 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12875/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
| mvninstall | 

[jira] [Commented] (HADOOP-11957) if an IOException error is thrown in DomainSocket.close we go into infinite loop.

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104265#comment-16104265
 ] 

Hadoop QA commented on HADOOP-11957:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 91 unchanged - 0 fixed = 92 total (was 91) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.unix.TestDomainSocket |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-11957 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12732085/HADOOP-11957.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cdd2552258f4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6330f2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12878/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12878/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12878/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12878/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> if an IOException error is thrown in DomainSocket.close we go into infinite 
> loop.
> 

[jira] [Commented] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104258#comment-16104258
 ] 

Hadoop QA commented on HADOOP-14388:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 56s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866612/HADOOP-14388.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 149f07712668 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6330f2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12879/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12879/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug

[jira] [Commented] (HADOOP-11875) [JDK9] Add a second copy of Hamlet without _ as a one-character identifier

2017-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104218#comment-16104218
 ] 

Hudson commented on HADOOP-11875:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12063 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12063/])
HADOOP-11875. [JDK9] Adding a second copy of Hamlet without _ as a (aajisaka: 
rev 38c6fa5c7a61c7f6d4d2db5f12f9c60d477fb397)
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/SingleCounterBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletSpec.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletSpec.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/Hamlet.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/ErrorsAndWarningsBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NodePage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/ContainerBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/package-info.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestInfoBlock.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTasksPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppAttemptPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/LipsumBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/ErrorPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AboutPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/InfoBlock.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsLogsPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUtils.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/CountersBlock.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsView.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSView.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/CountersPage.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/JobPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMView.java
* 

[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-07-27 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
Attachment: HADOOP-14626.testing.patch

Test patch to use 3.0.0-M1 (the release vote is now in progress).
You can apply it and try the mvn command with the -Pstaged-releases option.
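
For example (hypothetical invocation; the staged-releases profile and the
staging repository it points at are defined by the attached patch):

{code}
mvn clean install -DskipTests -Pstaged-releases
{code}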

> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
> Attachments: HADOOP-14626.testing.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Updated] (HADOOP-11875) [JDK9] Add a second copy of Hamlet without _ as a one-character identifier

2017-07-27 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-11875:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~chris.douglas] and [~ste...@apache.org] for 
reviewing this.

> [JDK9] Add a second copy of Hamlet without _ as a one-character identifier
> --
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira Ajisaka
>  Labels: webapp
> Fix For: 3.0.0-beta1
>
> Attachments: build_error_dump.txt, HADOOP-11875.01.patch, 
> HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, 
> HADOOP-11875.05.patch, HADOOP-11875.06.patch, HADOOP-11875.07.patch, 
> HADOOP-11875.10.patch, HADOOP-11875.11.patch
>
>
> From JDK9, _ as a one-character identifier is banned. Currently Web UI 
> (Hamlet) uses it. We should fix them to compile with JDK9. 
> https://bugs.openjdk.java.net/browse/JDK-8061549






[jira] [Updated] (HADOOP-11875) [JDK9] Add a second copy of Hamlet without _ as a one-character identifier

2017-07-27 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-11875:
---
Release Note: Added org.apache.hadoop.yarn.webapp.hamlet2 package without _ 
as a one-character identifier. Please use this package instead of 
org.apache.hadoop.yarn.webapp.hamlet.
 Summary: [JDK9] Add a second copy of Hamlet without _ as a 
one-character identifier  (was: [JDK9] Renaming _ as a one-character identifier 
to another identifier)
 Description: From JDK9, _ as a one-character identifier is banned. 
Currently Web UI (Hamlet) uses it. We should fix them to compile with JDK9. 
https://bugs.openjdk.java.net/browse/JDK-8061549  (was: From JDK9, _ as a 
one-character identifier is banned. Currently Web UI uses it. We should fix 
them to compile with JDK9. https://bugs.openjdk.java.net/browse/JDK-8061549)
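
For reference, a minimal sketch of the JDK9 restriction driving this change 
(demo class hypothetical; the hamlet2 fluent calls shown in the comments are 
illustrative):

{code}
// Compiles (with a warning) on JDK8, but fails on JDK9 with
// "as of release 9, '_' is a keyword, and may not be used as an identifier".
public class UnderscoreDemo {
  public static void main(String[] args) {
    int _ = 1;              // illegal identifier from JDK9 onward
    System.out.println(_);
  }
}
// hamlet2 sidesteps this by renaming the '_' classes/methods to '__', e.g.
// html.div().p().__("hello").__();
{code}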

> [JDK9] Add a second copy of Hamlet without _ as a one-character identifier
> --
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira Ajisaka
>  Labels: webapp
> Attachments: build_error_dump.txt, HADOOP-11875.01.patch, 
> HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, 
> HADOOP-11875.05.patch, HADOOP-11875.06.patch, HADOOP-11875.07.patch, 
> HADOOP-11875.10.patch, HADOOP-11875.11.patch
>
>
> From JDK9, _ as a one-character identifier is banned. Currently Web UI 
> (Hamlet) uses it. We should fix them to compile with JDK9. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104159#comment-16104159
 ] 

Hadoop QA commented on HADOOP-13672:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-13672 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831371/HADOOP-13672.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12876/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch, HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract the calls to ObjectMapper into another method, so that 
> in Solr we could override it to do the Map -> json conversion using noggit, 
> it would be helpful.
> Reference: SOLR-9542
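
A minimal sketch of the kind of refactor being requested (class and method 
names hypothetical; the real handler uses the older Jackson 1.x API):

{code}
import java.io.IOException;
import java.io.Writer;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical shape: isolate the Jackson call behind a protected hook.
public class JsonResponseWriter {
  protected void writeJson(Writer out, Map<String, Object> map)
      throws IOException {
    new ObjectMapper().writeValue(out, map);  // default path: Jackson
  }
}

// A downstream subclass (e.g. in Solr) could then serialize with noggit
// instead, dropping the Jackson dependency entirely:
class NoggitJsonResponseWriter extends JsonResponseWriter {
  @Override
  protected void writeJson(Writer out, Map<String, Object> map)
      throws IOException {
    out.write(org.noggit.JSONUtil.toJSON(map));  // assumes noggit on classpath
  }
}
{code}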



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104158#comment-16104158
 ] 

Hadoop QA commented on HADOOP-13055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-13055 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835046/HADOOP-13055.04.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12877/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}
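
For illustration, a client-side sketch assuming the proposed semantics (the 
feature is still under review, so this will not work without the patch; the 
mount table name and NN address are taken from the quoted comment):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LinkMergeSlashDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Merge the mount table's root with the root of hdfs://nn99/.
    conf.set("fs.viewfs.mounttable.default.linkMergeSlash", "hdfs://nn99/");
    FileSystem viewFs = FileSystem.get(URI.create("viewfs://default/"), conf);
    // Listing "/" on the viewfs client now resolves against hdfs://nn99/.
    for (FileStatus st : viewFs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}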



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier

2017-07-27 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-11875:
---
Hadoop Flags:   (was: Incompatible change)

> [JDK9] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira Ajisaka
>  Labels: webapp
> Attachments: build_error_dump.txt, HADOOP-11875.01.patch, 
> HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, 
> HADOOP-11875.05.patch, HADOOP-11875.06.patch, HADOOP-11875.07.patch, 
> HADOOP-11875.10.patch, HADOOP-11875.11.patch
>
>
> From JDK9, _ as a one-character identifier is banned. Currently Web UI uses 
> it. We should fix them to compile with JDK9. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Attachment: HADOOP-14397.002.patch

Thanks for the catch, [~manojg].  Fixed in the 02 patch.

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch, 
> HADOOP-14397.002.patch
>
>
> Once the Builder APIs reach stability, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}} and add the necessary contract 
> tests to cover the API for all file systems.
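
For illustration, a short sketch of the promoted builder call (builder method 
names assumed from the DistributedFileSystem builder; exact signatures may 
differ across patch revisions):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Create a file through the builder API on a plain FileSystem reference,
// rather than casting to DistributedFileSystem.
public class BuilderCreateDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.createFile(new Path("/tmp/demo"))
        .overwrite(true)
        .bufferSize(4096)
        .replication((short) 3)
        .build()) {
      out.writeUTF("hello");
    }
  }
}
{code}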



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14502) Confusion/name conflict between NameNodeActivity#BlockReportNumOps and RpcDetailedActivity#BlockReportNumOps

2017-07-27 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14502:
-
Release Note: Remove the BlockReport(NumOps,AvgTime) metrics emitted under 
the NameNodeActivity context in favor of StorageBlockReport(NumOps,AvgTime), 
which more accurately represent what is being measured. The same applies to 
the corresponding quantile metrics.

> Confusion/name conflict between NameNodeActivity#BlockReportNumOps and 
> RpcDetailedActivity#BlockReportNumOps
> 
>
> Key: HADOOP-14502
> URL: https://issues.apache.org/jira/browse/HADOOP-14502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
>  Labels: Incompatible
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14502.000.patch, HADOOP-14502.001.patch, 
> HADOOP-14502.002.patch
>
>
> Currently the {{BlockReport(NumOps|AvgTime)}} metrics emitted under the 
> {{RpcDetailedActivity}} context and those emitted under the 
> {{NameNodeActivity}} context are actually reporting different things despite 
> having the same name. {{NameNodeActivity}} reports the count/time of _per 
> storage_ block reports, whereas {{RpcDetailedActivity}} reports the 
> count/time of _per datanode_ block reports. This makes for a confusing 
> experience with two metrics having the same name reporting different values. 
> We already have the {{StorageBlockReportsOps}} metric under 
> {{NameNodeActivity}}. Can we make {{StorageBlockReport}} a {{MutableRate}} 
> metric and remove {{NameNodeActivity#BlockReport}} metric? Open to other 
> suggestions about how to address this as well. The 3.0 release seems a good 
> time to make this incompatible change.
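
For illustration, a minimal sketch (class and field names hypothetical; 
metrics-system registration omitted) of the MutableRate approach suggested 
above:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

// A MutableRate named "storageBlockReport" emits both
// StorageBlockReportNumOps and StorageBlockReportAvgTime, replacing the
// ambiguous BlockReport(NumOps,AvgTime) under NameNodeActivity.
@Metrics(about = "NameNode activity", context = "dfs")
public class NameNodeActivitySketch {
  @Metric("Duration of per-storage block report processing")
  MutableRate storageBlockReport;

  void recordStorageBlockReport(long elapsedMillis) {
    storageBlockReport.add(elapsedMillis);  // one op with its latency
  }
}
{code}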



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14686) Branch-2.7 .gitignore is out of date

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103957#comment-16103957
 ] 

Allen Wittenauer commented on HADOOP-14686:
---

At this point, it's obvious that the conversation is no longer productive or 
constructive. I'll be removing myself as a watcher.

BTW, you're very welcome for getting a branch that was never designed to work 
with Apache Yetus haphazardly working again.

> Branch-2.7 .gitignore is out of date
> 
>
> Key: HADOOP-14686
> URL: https://issues.apache.org/jira/browse/HADOOP-14686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit
>Affects Versions: 2.7.4
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.7.4
>
> Attachments: HADOOP-14686-branch-2.7.v0.patch, 
> HADOOP-14686-branch-2.7.v1.patch
>
>
> .gitignore is out of date on branch-2.7, which is causing issues in precommit 
> checks for that branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14686) Branch-2.7 .gitignore is out of date

2017-07-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103874#comment-16103874
 ] 

Konstantin Shvachko commented on HADOOP-14686:
--

??branch-2.7's ability to use the current precommit setup is sort of irrelevant 
as far as it's ability to build and get released.??

Not exactly sure what's irrelevant here.
I hope you are not saying you revoked Yetus support for the Hadoop stable 
release by not integrating HADOOP-11746 into branch-2.
So what are your recommendations for restoring the branch-2 build, which 
worked 2 weeks ago?

> Branch-2.7 .gitignore is out of date
> 
>
> Key: HADOOP-14686
> URL: https://issues.apache.org/jira/browse/HADOOP-14686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit
>Affects Versions: 2.7.4
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.7.4
>
> Attachments: HADOOP-14686-branch-2.7.v0.patch, 
> HADOOP-14686-branch-2.7.v1.patch
>
>
> .gitignore is out of date on branch-2.7, which is causing issues in precommit 
> checks for that branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103859#comment-16103859
 ] 

Hudson commented on HADOOP-14692:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12060 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12060/])
HADOOP-14692. Upgrade Apache Rat (aw: rev 
5f4808ce73a373e646ce324b0037dca54e8adc1e)
* (edit) pom.xml


> Upgrade Apache Rat
> --
>
> Key: HADOOP-14692
> URL: https://issues.apache.org/jira/browse/HADOOP-14692
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14692.00.patch
>
>
> We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103829#comment-16103829
 ] 

Wei-Chiu Chuang edited comment on HADOOP-14691 at 7/27/17 8:11 PM:
---

Hi [~Eric88] thanks for the detailed report!
It seems what you discovered is similar to HDFS-10429. 

I have not yet reviewed the patch in depth, but it looks like your patch 
contains irrelevant stuff. Could you remove it and rebase against trunk? 
Thanks.


was (Author: jojochuang):
Hi [~Eric88] thanks for the detailed report!
It seems what you discovered is similar to HDFS-10429. 

I have not yet reviewed the patch in depth, but it looks like your patch 
contains unrelevant stuff. Could you remove them and rebase against trunk? 
Thanks.

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: hadoop-2.7.3-src.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. Finally, 
> FSDataOutputStream.close() calls the close method in HDFS to end the 
> communication of this write process between the server and client.
> With the command “hadoop fs -put”, for each created FSDataOutputStream 
> object, FSDataOutputStream.close() is called twice, which means the close 
> method of the underlying distributed file system is called twice. This is a 
> bug, because the communication channel, for example a socket, might be 
> repeatedly shut down. Unfortunately, if the socket has no protection against 
> this, the second close can raise an error. 
> Further, we think a correct upper file system design should follow a 
> one-time-close principle: each creation of an underlying distributed file 
> system object should correspond to exactly one close. 
> For the command “hadoop fs -put”, there is a double close, as follows:
> a.The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b.The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> 
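
For readers following along, a condensed sketch (method shape illustrative) of 
the double-close pattern the stack traces show, plus the one-time-close 
alternative:

{code}
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Condensed from CommandWithDestination.writeStreamToFile (names illustrative):
public class DoubleCloseSketch {
  static void writeStreamToFile(FileSystem fs, InputStream in, Path target,
      Configuration conf) throws Exception {
    FSDataOutputStream out = null;
    try {
      out = fs.create(target);
      IOUtils.copyBytes(in, out, conf, true);  // 'true' closes 'out' on success
    } finally {
      IOUtils.closeStream(out);  // closes 'out' again -- the reported 2nd close
    }
  }
}
// One-time-close alternative: pass 'false' to copyBytes and rely solely on
// the finally block, so each stream is closed exactly once.
{code}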

[jira] [Commented] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103829#comment-16103829
 ] 

Wei-Chiu Chuang commented on HADOOP-14691:
--

Hi [~Eric88] thanks for the detailed report!
It seems what you discovered is similar to HDFS-10429. 

I have not yet reviewed the patch in depth, but it looks like your patch 
contains irrelevant stuff. Could you remove it and rebase against trunk? 
Thanks.

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: hadoop-2.7.3-src.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. Finally, 
> FSDataOutputStream.close() calls the close method in HDFS to end the 
> communication of this write process between the server and client.
> With the command “hadoop fs -put”, for each created FSDataOutputStream 
> object, FSDataOutputStream.close() is called twice, which means the close 
> method of the underlying distributed file system is called twice. This is a 
> bug, because the communication channel, for example a socket, might be 
> repeatedly shut down. Unfortunately, if the socket has no protection against 
> this, the second close can raise an error. 
> Further, we think a correct upper file system design should follow a 
> one-time-close principle: each creation of an underlying distributed file 
> system object should correspond to exactly one close. 
> For the command “hadoop fs -put”, there is a double close, as follows:
> a.The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b.The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> 

[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103825#comment-16103825
 ] 

Sean Busbey commented on HADOOP-14672:
--

Give me tonight to figure out if I have concerns.

> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14692:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Thanks. Committed.

> Upgrade Apache Rat
> --
>
> Key: HADOOP-14692
> URL: https://issues.apache.org/jira/browse/HADOOP-14692
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14692.00.patch
>
>
> We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103814#comment-16103814
 ] 

Junping Du commented on HADOOP-14672:
-

[~busbey], any further comments here? If not, I will go ahead and commit the 
latest patch.

> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103793#comment-16103793
 ] 

Hadoop QA commented on HADOOP-14692:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 24s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14692 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879206/HADOOP-14692.00.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux a185441c66c9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27a1a5f |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12872/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12872/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12872/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade Apache Rat
> --
>
> Key: HADOOP-14692
> URL: https://issues.apache.org/jira/browse/HADOOP-14692
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Trivial
> Attachments: HADOOP-14692.00.patch
>
>
> We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103741#comment-16103741
 ] 

Anu Engineer commented on HADOOP-14692:
---

+1, thanks for your help. This is needed for HDFS-12034; going to mark that 
JIRA as dependent on this one. 

> Upgrade Apache Rat
> --
>
> Key: HADOOP-14692
> URL: https://issues.apache.org/jira/browse/HADOOP-14692
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Trivial
> Attachments: HADOOP-14692.00.patch
>
>
> We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103645#comment-16103645
 ] 

Allen Wittenauer commented on HADOOP-14667:
---

I'm trying to make this work on the fly by putting the upgraded files 
elsewhere.  devenv is particularly finicky, so we may be forced to change the 
existing files in-tree.  As long as committers don't try to commit the 
upgraded files, this might be OK.

> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-07-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14688:
---
Attachment: GC root of the String.png

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: GC root of the String.png, HADOOP-14688.01.patch, 
> heapdump analysis.png
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}}. We should not create duplicate 
> objects for them.
> This is especially important for HDFS-10899, where we try to re-encrypt all 
> files' EDEKs in a given EZ. Those EDEKs all have the same key name and 
> mostly use no more than a couple of key version names.
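
For illustration, a minimal sketch of the proposed interning (class shape 
hypothetical, loosely mirroring KeyProvider.KeyVersion):

{code}
// Hypothetical sketch: intern the few, highly repeated name strings so the
// many KeyVersion instances created during re-encryption share one String
// object per distinct value instead of carrying duplicates.
public class KeyVersion {
  private final String name;
  private final String versionName;
  private final byte[] material;

  public KeyVersion(String name, String versionName, byte[] material) {
    this.name = name == null ? null : name.intern();
    this.versionName = versionName == null ? null : versionName.intern();
    this.material = material;
  }
}
{code}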



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-07-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103609#comment-16103609
 ] 

Xiao Chen commented on HADOOP-14688:


The heap dumps are too big to attach here, so I uploaded a screenshot of the 
most relevant analysis result from them.

The 2 most duplicated strings (mG... and 0O...) are the 2 key version names. I 
was running re-encryption on a zone with 1M files; 2 different key versions 
were in use among those files in this run.

Verified that after interning, the duplication goes away.

[~daryn], do you think this makes sense? Thanks!

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14688.01.patch, heapdump analysis.png
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}}. We should not create duplicate 
> objects for them.
> This is especially important for HDFS-10899, where we try to re-encrypt all 
> files' EDEKs in a given EZ. Those EDEKs all have the same key name and 
> mostly use no more than a couple of key version names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-07-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14688:
---
Attachment: heapdump analysis.png

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14688.01.patch, heapdump analysis.png
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}}. We should not create duplicate 
> objects for them.
> This is especially important for HDFS-10899, where we try to re-encrypt all 
> files' EDEKs in a given EZ. Those EDEKs all have the same key name and 
> mostly use no more than a couple of key version names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier

2017-07-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103583#comment-16103583
 ] 

Chris Douglas commented on HADOOP-11875:


bq. Can I commit the latest patch to trunk
Yes, please go ahead.

Long-term, Hamlet will probably become part of MapReduce. If downstream folks 
need a separate jar, that's not too hard to produce.

> [JDK9] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira Ajisaka
>  Labels: webapp
> Attachments: build_error_dump.txt, HADOOP-11875.01.patch, 
> HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, 
> HADOOP-11875.05.patch, HADOOP-11875.06.patch, HADOOP-11875.07.patch, 
> HADOOP-11875.10.patch, HADOOP-11875.11.patch
>
>
> From JDK9, _ as a one-character identifier is banned. Currently Web UI uses 
> it. We should fix them to compile with JDK9. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14692:
--
Status: Patch Available  (was: Open)

> Upgrade Apache Rat
> --
>
> Key: HADOOP-14692
> URL: https://issues.apache.org/jira/browse/HADOOP-14692
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Trivial
> Attachments: HADOOP-14692.00.patch
>
>
> We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14692:
--
Attachment: HADOOP-14692.00.patch

-00:
* upgrade from 10 to 12 

> Upgrade Apache Rat
> --
>
> Key: HADOOP-14692
> URL: https://issues.apache.org/jira/browse/HADOOP-14692
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Trivial
> Attachments: HADOOP-14692.00.patch
>
>
> We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14692) Upgrade Apache Rat

2017-07-27 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14692:
-

 Summary: Upgrade Apache Rat
 Key: HADOOP-14692
 URL: https://issues.apache.org/jira/browse/HADOOP-14692
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Trivial


We should upgrade Apache RAT to something modern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13809) hive: 'java.lang.IllegalStateException(zip file closed)'

2017-07-27 Thread frank luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103509#comment-16103509
 ] 

frank luo commented on HADOOP-13809:


I believe HIVE-11681 and this one are both related to 
https://bugs.openjdk.java.net/browse/JDK-6947916, whose fix hasn't been 
released.

I am able to reproduce it with Oracle JDK 1.8.0_131.
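
For what it's worth, one commonly cited mitigation for JDK-6947916 (an 
assumption here, not verified against this report) is to disable the global 
jar URLConnection cache early at process startup, so a JarFile closed 
elsewhere is not handed back to a later reader:

{code}
import java.net.URL;
import java.net.URLConnection;

// Sketch of the mitigation, not a verified fix for this report: flip the
// static default so jar: URLConnections stop sharing cached JarFile handles.
public class DisableJarCaching {
  public static void main(String[] args) throws Exception {
    URLConnection conn = new URL("jar:file:/tmp/some.jar!/").openConnection();
    conn.setDefaultUseCaches(false);  // sets the JVM-wide default
  }
}
{code}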

> hive: 'java.lang.IllegalStateException(zip file closed)'
> 
>
> Key: HADOOP-13809
> URL: https://issues.apache.org/jira/browse/HADOOP-13809
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Adriano
>
> Randomly some of the hive queries are failing with the below exception on 
> HS2: 
> {code}
> 2016-11-07 02:36:40,996 ERROR org.apache.hadoop.hive.ql.exec.Task: 
> [HiveServer2-Background-Pool: Thread-1823748]: Ended Job = 
> job_1478336955303_31030 with exception 'java.lang.IllegalStateException(zip 
> file 
>  closed)' 
> java.lang.IllegalStateException: zip file closed 
> at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634) 
> at java.util.zip.ZipFile.getEntry(ZipFile.java:305) 
> at java.util.jar.JarFile.getEntry(JarFile.java:227) 
> at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128) 
> at 
> sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132) 
> at 
> sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
>  
> at 
> java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233) 
> at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
>  
> at 
> javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
>  
> at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255) 
> at 
> javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>  
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2526) 
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503) 
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409) 
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:982) 
> at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2032) 
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:484) 
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:474) 
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:210) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:596) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:594) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:594) 
> at 
> org.apache.hadoop.mapred.JobClient.getTaskReports(JobClient.java:665) 
> at 
> org.apache.hadoop.mapred.JobClient.getReduceTaskReports(JobClient.java:689) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:272)
>  
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
>  
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:435) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) 
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) 
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1770) 
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1527) 
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1306) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1115) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108) 
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
>  
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
>  
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 

[jira] [Commented] (HADOOP-13809) hive: 'java.lang.IllegalStateException(zip file closed)'

2017-07-27 Thread frank luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103420#comment-16103420
 ] 

frank luo commented on HADOOP-13809:


We are seeing it happen once every few days on HDP 2.5.3 with JDK 1.7.0_67, 
in the HiveServer2 log. 

> hive: 'java.lang.IllegalStateException(zip file closed)'
> 
>
> Key: HADOOP-13809
> URL: https://issues.apache.org/jira/browse/HADOOP-13809
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Adriano
>
> Randomly some of the hive queries are failing with the below exception on 
> HS2: 
> {code}
> 2016-11-07 02:36:40,996 ERROR org.apache.hadoop.hive.ql.exec.Task: 
> [HiveServer2-Background-Pool: Thread-1823748]: Ended Job = 
> job_1478336955303_31030 with exception 'java.lang.IllegalStateException(zip 
> file 
>  closed)' 
> java.lang.IllegalStateException: zip file closed 
> at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634) 
> at java.util.zip.ZipFile.getEntry(ZipFile.java:305) 
> at java.util.jar.JarFile.getEntry(JarFile.java:227) 
> at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128) 
> at 
> sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132) 
> at 
> sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
>  
> at 
> java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233) 
> at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
>  
> at 
> javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
>  
> at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255) 
> at 
> javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>  
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2526) 
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503) 
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409) 
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:982) 
> at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2032) 
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:484) 
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:474) 
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:210) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:596) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:594) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:594) 
> at 
> org.apache.hadoop.mapred.JobClient.getTaskReports(JobClient.java:665) 
> at 
> org.apache.hadoop.mapred.JobClient.getReduceTaskReports(JobClient.java:689) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:272)
>  
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
>  
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:435) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) 
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) 
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1770) 
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1527) 
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1306) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1115) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108) 
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
>  
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
>  
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 
> 

[jira] [Updated] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lei updated HADOOP-14691:
--
Attachment: (was: hadoop-2.7.3-src.patch)

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: hadoop-2.7.3-src.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. Finally, 
> FSDataOutputStream.close() calls the close method in HDFS to end the 
> communication of this write process between the server and client.
> With the command “hadoop fs -put”, for each created FSDataOutputStream 
> object, FSDataOutputStream.close() is called twice, which means the close 
> method of the underlying distributed file system is called twice. This is a 
> bug, because the communication channel, for example a socket, might be 
> repeatedly shut down. Unfortunately, if the socket has no protection against 
> this, the second close can raise an error. 
> Further, we think a correct upper file system design should follow a 
> one-time-close principle: each creation of an underlying distributed file 
> system object should correspond to exactly one close. 
> For the command “hadoop fs -put”, there is a double close, as follows:
> a.The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b.The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> 
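Both stack traces above bottom out in CommandWithDestination$TargetFileSystem.writeStreamToFile: the first close happens inside IOUtils.copyBytes (called with close=true), the second inside IOUtils.closeStream in the enclosing finally block. A minimal, self-contained sketch of that pattern, reconstructed from the traces rather than copied from the 2.7.3 source:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;

class DoubleCloseDemo {
  // Sketch of the double-close pattern implied by the two traces:
  // copyBytes(close=true) closes both streams when it finishes, then
  // closeStream() closes the output stream a second time.
  static void writeStreamToFile(InputStream in, OutputStream out,
      Configuration conf) throws IOException {
    try {
      IOUtils.copyBytes(in, out, conf, true);  // first close, inside copyBytes
    } finally {
      IOUtils.closeStream(out);                // second close on the same stream
    }
  }
}
{code}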

[jira] [Updated] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lei updated HADOOP-14691:
--
Status: Patch Available  (was: Open)

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: hadoop-2.7.3-src.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. FSDataOutputStream.close() 
> invokes the close method of the underlying HDFS stream to terminate the write 
> communication between client and server.
> With the command “hadoop fs -put”, FSDataOutputStream.close() is called twice 
> for each created FSDataOutputStream object, which means the close method of 
> the underlying distributed file system is also called twice. This is a bug: 
> the communication channel, for example a socket, may be shut down repeatedly, 
> and if the socket has no protection against a double close, the second close 
> can fail.
> Furthermore, we think a correct upper-layer file system design should follow 
> the close-once principle: each underlying distributed file system object that 
> is created should be closed exactly once.
> With the command “hadoop fs -put”, the double close occurs as follows:
> a. The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b. The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> 

[jira] [Updated] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lei updated HADOOP-14691:
--
Attachment: hadoop-2.7.3-src.patch

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: hadoop-2.7.3-src.patch, hadoop-2.7.3-src.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. FSDataOutputStream.close() 
> invokes the close method of the underlying HDFS stream to terminate the write 
> communication between client and server.
> With the command “hadoop fs -put”, FSDataOutputStream.close() is called twice 
> for each created FSDataOutputStream object, which means the close method of 
> the underlying distributed file system is also called twice. This is a bug: 
> the communication channel, for example a socket, may be shut down repeatedly, 
> and if the socket has no protection against a double close, the second close 
> can fail.
> Furthermore, we think a correct upper-layer file system design should follow 
> the close-once principle: each underlying distributed file system object that 
> is created should be closed exactly once.
> With the command “hadoop fs -put”, the double close occurs as follows:
> a. The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b. The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> 

[jira] [Updated] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lei updated HADOOP-14691:
--
Attachment: hadoop_common_unit_test_result_after_modification.docx
hadoop_common_unit_test_result_before_modification.docx
hadoop-2.7.3-src.patch

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: hadoop-2.7.3-src.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. FSDataOutputStream.close() 
> invokes the close method of the underlying HDFS stream to terminate the write 
> communication between client and server.
> With the command “hadoop fs -put”, FSDataOutputStream.close() is called twice 
> for each created FSDataOutputStream object, which means the close method of 
> the underlying distributed file system is also called twice. This is a bug: 
> the communication channel, for example a socket, may be shut down repeatedly, 
> and if the socket has no protection against a double close, the second close 
> can fail.
> Furthermore, we think a correct upper-layer file system design should follow 
> the close-once principle: each underlying distributed file system object that 
> is created should be closed exactly once.
> With the command “hadoop fs -put”, the double close occurs as follows:
> a. The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b. The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 

[jira] [Updated] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lei updated HADOOP-14691:
--
Status: Open  (was: Patch Available)

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. FSDataOutputStream.close() 
> invokes the close method of the underlying HDFS stream to terminate the write 
> communication between client and server.
> With the command “hadoop fs -put”, FSDataOutputStream.close() is called twice 
> for each created FSDataOutputStream object, which means the close method of 
> the underlying distributed file system is also called twice. This is a bug: 
> the communication channel, for example a socket, may be shut down repeatedly, 
> and if the socket has no protection against a double close, the second close 
> can fail.
> Furthermore, we think a correct upper-layer file system design should follow 
> the close-once principle: each underlying distributed file system object that 
> is created should be closed exactly once.
> With the command “hadoop fs -put”, the double close occurs as follows:
> a. The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b. The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 

[jira] [Updated] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lei updated HADOOP-14691:
--
Status: Patch Available  (was: Open)

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>  Labels: close, filesystem, hadoop, multi
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and, at the end, closed. FSDataOutputStream.close() 
> invokes the close method of the underlying HDFS stream to terminate the write 
> communication between client and server.
> With the command “hadoop fs -put”, FSDataOutputStream.close() is called twice 
> for each created FSDataOutputStream object, which means the close method of 
> the underlying distributed file system is also called twice. This is a bug: 
> the communication channel, for example a socket, may be shut down repeatedly, 
> and if the socket has no protection against a double close, the second close 
> can fail.
> Furthermore, we think a correct upper-layer file system design should follow 
> the close-once principle: each underlying distributed file system object that 
> is created should be closed exactly once.
> With the command “hadoop fs -put”, the double close occurs as follows:
> a. The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b. The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 

[jira] [Commented] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-27 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102974#comment-16102974
 ] 

Vishwajeet Dusane commented on HADOOP-14678:


Thanks [~jzhuge] - +1 on 002.patch.

> AdlFilesystem#initialize swallows exception when getting user name
> --
>
> Key: HADOOP-14678
> URL: https://issues.apache.org/jira/browse/HADOOP-14678
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-14678.001.patch, HADOOP-14678.002.patch
>
>
> https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122
> It should log the exception.
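As a sketch of the kind of change being reviewed here, logging the caught exception instead of dropping it (the fallback user name and logger wiring below are assumptions, not the contents of the attached patches):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class UserNameLookup {
  private static final Logger LOG = LoggerFactory.getLogger(UserNameLookup.class);

  // Resolve the short Hadoop user name; log (do not swallow) failures.
  static String currentUserOrFallback() {
    try {
      return UserGroupInformation.getCurrentUser().getShortUserName();
    } catch (IOException e) {
      LOG.debug("Failed to get Hadoop user name, falling back to 'hadoop'", e);
      return "hadoop";  // assumed fallback value
    }
  }
}
{code}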



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-07-27 Thread Eric Lei (JIRA)
Eric Lei created HADOOP-14691:
-

 Summary: Shell command "hadoop fs -put" multiple close problem
 Key: HADOOP-14691
 URL: https://issues.apache.org/jira/browse/HADOOP-14691
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.7.3
 Environment: CentOS7.0
JDK1.8.0_121
hadoop2.7.3
Reporter: Eric Lei


1. Bug description
The shell command “hadoop fs -put” is a write operation. In this process, an 
FSDataOutputStream is created and, at the end, closed. FSDataOutputStream.close() 
invokes the close method of the underlying HDFS stream to terminate the write 
communication between client and server.
With the command “hadoop fs -put”, FSDataOutputStream.close() is called twice for 
each created FSDataOutputStream object, which means the close method of the 
underlying distributed file system is also called twice. This is a bug: the 
communication channel, for example a socket, may be shut down repeatedly, and if 
the socket has no protection against a double close, the second close can fail.
Furthermore, we think a correct upper-layer file system design should follow the 
close-once principle: each underlying distributed file system object that is 
created should be closed exactly once.
With the command “hadoop fs -put”, the double close occurs as follows:
a. The first close process:
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
at 
org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)

b. The second close process:
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
at 
org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
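A close-once variant of the same pattern, sketched as an illustration of the principle described above (this is an assumption about the shape of a fix, not the attached hadoop-2.7.3-src.patch):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;

class SingleCloseDemo {
  // Sketch of a close-once variant: copyBytes(close=false) leaves the
  // streams open, so the finally block performs the only close.
  static void writeStreamToFile(InputStream in, OutputStream out,
      Configuration conf) throws IOException {
    try {
      IOUtils.copyBytes(in, out, conf, false);  // do not close here
    } finally {
      IOUtils.closeStream(out);                 // single, null-safe close
      IOUtils.closeStream(in);
    }
  }
}
{code}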

[jira] [Commented] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102851#comment-16102851
 ] 

Hadoop QA commented on HADOOP-14678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14678 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879121/HADOOP-14678.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9261907bb522 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27a1a5f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12870/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12870/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AdlFilesystem#initialize swallows exception when getting user name
> --
>
> Key: HADOOP-14678
> URL: https://issues.apache.org/jira/browse/HADOOP-14678
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: 

[jira] [Updated] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-27 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14678:

Labels: supportability  (was: )

> AdlFilesystem#initialize swallows exception when getting user name
> --
>
> Key: HADOOP-14678
> URL: https://issues.apache.org/jira/browse/HADOOP-14678
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-14678.001.patch, HADOOP-14678.002.patch
>
>
> https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122
> It should log the exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-27 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14678:

Attachment: HADOOP-14678.002.patch

Patch 002
* Addressed Wei-Chiu's review comment

> AdlFilesystem#initialize swallows exception when getting user name
> --
>
> Key: HADOOP-14678
> URL: https://issues.apache.org/jira/browse/HADOOP-14678
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14678.001.patch, HADOOP-14678.002.patch
>
>
> https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122
> It should log the exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102813#comment-16102813
 ] 

John Zhuge commented on HADOOP-14678:
-

Thanks [~vishwajeet.dusane]. No functional issue. I was just wondering whether 
we could let the IOException through, but it looks like we do want to catch it.

> AdlFilesystem#initialize swallows exception when getting user name
> --
>
> Key: HADOOP-14678
> URL: https://issues.apache.org/jira/browse/HADOOP-14678
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14678.001.patch
>
>
> https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122
> It should log the exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-27 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102803#comment-16102803
 ] 

Vishwajeet Dusane commented on HADOOP-14678:


This is a case of the Hadoop user vs. the ADL user. ADL relies on Azure AD, so 
failing to retrieve the local Hadoop user information has no implication for 
ADL behavior, unlike HDFS. Is there a functional issue?

> AdlFilesystem#initialize swallows exception when getting user name
> --
>
> Key: HADOOP-14678
> URL: https://issues.apache.org/jira/browse/HADOOP-14678
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14678.001.patch
>
>
> https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122
> It should log the exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14683) FileStatus.compareTo binary compat issue between 2.7 and 2.8

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102800#comment-16102800
 ] 

Hadoop QA commented on HADOOP-14683:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
| JDK v1.7.0_131 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14683 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879112/HADOOP-14683-branch-2-02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 32b90de2241b 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Created] (HADOOP-14690) RetryInvocationHandler$RetryInfo should override toString()

2017-07-27 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14690:
--

 Summary: RetryInvocationHandler$RetryInfo should override 
toString()
 Key: HADOOP-14690
 URL: https://issues.apache.org/jira/browse/HADOOP-14690
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka
Priority: Minor


{code:title=RetryInvocationHandler.java}
  LOG.trace("#{} processRetryInfo: retryInfo={}, waitTime={}",
  callId, retryInfo, waitTime);
{code}
RetryInfo is passed to this trace log, but its default toString() does not output anything useful.
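A minimal sketch of such an override (the fields shown are assumptions for illustration; RetryInfo's real fields live inside RetryInvocationHandler):

{code:java}
// Sketch: a RetryInfo-like holder whose toString() makes the trace
// log above readable. Field names are assumptions, not the real class.
class RetryInfo {
  private final long delayMillis;
  private final String action;

  RetryInfo(long delayMillis, String action) {
    this.delayMillis = delayMillis;
    this.action = action;
  }

  @Override
  public String toString() {
    return "RetryInfo{delayMillis=" + delayMillis + ", action=" + action + "}";
  }
}
{code}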




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14686) Branch-2.7 .gitignore is out of date

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102785#comment-16102785
 ] 

Allen Wittenauer commented on HADOOP-14686:
---

branch-2.7's ability to use the current precommit setup is sort of irrelevant 
to its ability to build and get released.  Hadoop 2.7.0 was released on 
the same day that HADOOP-11746 was made official. That's important because it 
also means that the test-patch used during 2.7.0's development wasn't the 
rewrite and didn't support branch switching, much less Docker.

> Branch-2.7 .gitignore is out of date
> 
>
> Key: HADOOP-14686
> URL: https://issues.apache.org/jira/browse/HADOOP-14686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit
>Affects Versions: 2.7.4
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.7.4
>
> Attachments: HADOOP-14686-branch-2.7.v0.patch, 
> HADOOP-14686-branch-2.7.v1.patch
>
>
> .gitignore is out of date on branch-2.7, which is causing issues in precommit 
> checks for that branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org