[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747598#comment-15747598
 ] 

John Zhuge commented on HADOOP-13901:
-------------------------------------

+1 LGTM (non-binding)

> Fix ASF License warnings
> ------------------------
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch, HADOOP-13901.02.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}
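Warnings like these for generated IDE files are usually addressed by excluding the paths from the RAT check rather than adding license headers. A hedged sketch of an apache-rat-plugin exclusion block, using the paths from the report above (the actual HADOOP-13901 patch may exclude different paths or edit a different pom):

```xml
<!-- Hypothetical sketch of a pom.xml build plugin entry; verify against the
     actual patch before relying on the exact paths. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/maven-eclipse.xml</exclude>
      <exclude>**/.externalToolBuilders/**</exclude>
      <exclude>**/.classpath</exclude>
      <exclude>**/.project</exclude>
      <exclude>**/.settings/**</exclude>
    </excludes>
  </configuration>
</plugin>
```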



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts

2016-12-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747597#comment-15747597
 ] 

Xiao Chen commented on HADOOP-13780:
------------------------------------

Thanks, Sean, for the comment.

I have finished a first draft of #1, shown in the 'Dependencies' tab of this 
jira's linked spreadsheet. I will work on closing the final gaps, then start on #2.

Among those dependencies:
- jdiff is LGPL, but according to HADOOP-12893 it is not bundled, so we're good.
- ldapsdk is new; a quick search of the poms didn't find any reference. Will look 
more.
- JSON needs some help: [~ajisakaa] [~andrew.wang], apologies for my fading 
memory, but do you recall what was done for it in HADOOP-12893? (I searched that 
jira but found no mention in the comments, and in the spreadsheet it is marked 
{{Done?}} == N... I seem to remember everything was done when we posted 
patches and resolved that jira.) Anyway, {{bundled?}} is also N, so I'm guessing 
that's why it was omitted at the time.

Ping me if anyone wants edit permission on the spreadsheet. Note that the 
'Dependencies' and 'parsed' tabs are entirely script-generated and are meant to 
be regenerated on later runs. In case anyone is curious, here's how to (crudely) 
generate them:
{noformat}
xiao-MBP:license xiao$ cat step1.sh 
#!/bin/sh -x

# First save spreadsheet to local:
# 'Licenses' tab to licenses.tsv
# 'Overrides' tab to overrides.tsv
# 'parse.py script' tab to parse.py
# 'standardize.py' tab to standardize.py
# 'generate.py script' tab to generate.py

mvn license:aggregate-add-third-party

OUTPUT_DIR=~/Downloads/license/
cp target/generated-sources/license/THIRD-PARTY.txt $OUTPUT_DIR


xiao-MBP:license xiao$ cat step2.sh 
#!/bin/sh -x

python parse.py > parsed.tsv
xiao-MBP:license xiao$ cat step3.sh 
#!/bin/sh -x

python standardize.py
# Generates standardized.tsv, which becomes the 'Dependencies' tab in the spreadsheet.
{noformat}
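For the curious, the THIRD-PARTY.txt produced by {{license:aggregate-add-third-party}} lists one dependency per line in roughly the form {{(License Name) Artifact Name (group:artifact:version - url)}}. A minimal sketch of what a parse step like {{parse.py}} might do (hypothetical; this is not Xiao's actual script, and the line format should be verified against your plugin version):

```python
import re

# One THIRD-PARTY.txt dependency line, e.g.:
#   (Apache License 2.0) Apache Commons Lang (org.apache.commons:commons-lang3:3.4 - http://commons.apache.org/)
LINE_RE = re.compile(
    r"^\s*\((?P<license>[^)]+)\)\s+"      # (License Name)
    r"(?P<name>.+?)\s+"                   # Artifact Name
    r"\((?P<gav>[^ )]+)"                  # group:artifact:version
    r"(?:\s+-\s+(?P<url>\S+))?\)\s*$"     # optional " - url"
)

def parse_line(line):
    """Return (license, name, group:artifact:version, url) or None for non-matching lines."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return (m.group("license"), m.group("name"), m.group("gav"), m.group("url") or "")

def to_tsv(lines):
    """Convert matching lines to tab-separated rows, skipping headers/blanks."""
    rows = (parse_line(l) for l in lines)
    return ["\t".join(r) for r in rows if r]
```

The TSV rows can then be pasted or imported into a spreadsheet tab, which matches the workflow described above.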

> LICENSE/NOTICE are out of date for source artifacts
> ---------------------------------------------------
>
> Key: HADOOP-13780
> URL: https://issues.apache.org/jira/browse/HADOOP-13780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Xiao Chen
>Priority: Blocker
>
> we need to perform a check that all of our bundled works are properly 
> accounted for in our LICENSE/NOTICE files.
> At a minimum, it looks like HADOOP-10075 introduced some changes that have 
> not been accounted for.
> e.g. the jsTree plugin found at 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}}
>  does not show up in LICENSE.txt to (a) indicate that we're redistributing it 
> under the MIT option and (b) give proper citation of the original copyright 
> holder per ASF policy.






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747541#comment-15747541
 ] 

Akira Ajisaka commented on HADOOP-13901:
----------------------------------------

No problem!

> Fix ASF License warnings
> ------------------------
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch, HADOOP-13901.02.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747542#comment-15747542
 ] 

Akira Ajisaka commented on HADOOP-13901:
----------------------------------------

02 patch: excluded the .settings directory.

> Fix ASF License warnings
> ------------------------
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch, HADOOP-13901.02.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747539#comment-15747539
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-------------------------------------

[~yuanbo], the issue you hit is specific to the IBM JDK: TestWebDelegationToken 
was failing even without HADOOP-13565.
Based on that, I think the failure is related to neither HADOOP-13565 nor what 
HADOOP-13890 is trying to solve here.
If you want, we could fix the IBM JDK issue for TestWebDelegationToken in a 
separate ticket later.

IBM JDK 8
{code}
[root@c6404 hadoop]# /opt/ibm/java-x86_64-80/bin/java -version
java version "1.8.0"
Java(TM) SE Runtime Environment (build pxa6480sr3fp12-20160919_01(SR3 FP12))
IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References 
20160915_318796 (JIT enabled, AOT enabled)
J9VM - R28_Java8_SR3_20160915_0912_B318796
JIT  - tr.r14.java.green_20160818_122998
GC   - R28_Java8_SR3_20160915_0912_B318796_CMPRSS
J9CL - 20160915_318796)
JCL - 20160914_01 based on Oracle jdk8u101-b13
{code}

Git info after reverting HADOOP-13565:
{code}
commit b5a719486112fa1ca60bee5eec81f41a0828b928
Author: root 
Date:   Wed Dec 14 07:10:36 2016 +
Revert "HADOOP-13565. KerberosAuthenticationHandler#authenticate should not 
rebuild SPN based on client request. Contributed by Xiaoyu Yao."

This reverts commit 4c38f11cec0664b70e52f9563052dca8fb17c33f.
{code} 

The test still failed even after reverting HADOOP-13565:
{code}
Tests run: 12, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 10.748 sec <<< 
FAILURE! - in 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
testKerberosDelegationTokenAuthenticator(org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken)
  Time elapsed: 1.571 sec  <<< ERROR!
javax.security.auth.login.LoginException: Bad JAAS configuration: unrecognized 
option: isInitiator
at 
com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:23)
at 
com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:57)
at 
com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:686)
at 
com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
at 
javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
at 
java.security.AccessController.doPrivileged(AccessController.java:686)
at 
javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
{code}
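For reference, the "unrecognized option: isInitiator" error above arises because the Oracle/OpenJDK and IBM Krb5LoginModule classes accept different JAAS options. An illustrative sketch of the two forms (the entry name "KerberosAcceptor" and the keytab path are hypothetical, and the IBM option names are taken from IBM's documentation; verify before use):

```
/* Oracle/OpenJDK-style JAAS entry (the form the Hadoop tests build): */
KerberosAcceptor {
  com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    isInitiator=false
    keyTab="/path/to/http.keytab"
    principal="HTTP/localhost";
};

/* Rough IBM JDK equivalent: isInitiator is not a recognized option there;
   the IBM module instead uses credsType to select acceptor behavior. */
KerberosAcceptor {
  com.ibm.security.auth.module.Krb5LoginModule required
    credsType=acceptor
    useKeytab="file:///path/to/http.keytab"
    principal="HTTP/localhost";
};
```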

> TestWebDelegationToken and TestKMS fails in trunk
> -------------------------------------------------
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, HADOOP-13890.04.patch, 
> test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> 

[jira] [Updated] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13901:
-----------------------------------
Attachment: HADOOP-13901.02.patch

> Fix ASF License warnings
> ------------------------
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch, HADOOP-13901.02.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747438#comment-15747438
 ] 

Hadoop QA commented on HADOOP-13890:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 66 unchanged - 1 fixed = 67 total (was 67) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
35s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843154/HADOOP-13890.04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8836f9967b6d 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ada876c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11270/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11270/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11270/testReport/ |
| modules | 

[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13890:
--------------------------------
Attachment: HADOOP-13890.04.patch

Attaching patch v04 to enable KerberosAuthenticationHandler trace logging for 
the tests in TestWebDelegationToken.java.


> TestWebDelegationToken and TestKMS fails in trunk
> -------------------------------------------------
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, HADOOP-13890.04.patch, 
> test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747314#comment-15747314
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-------------------------------------

[~yuanbo], do you use the Oracle JDK or the IBM JDK, and which version? The test 
passed on my machine with Oracle JDK 1.8.

Can you enable trace logging for KerberosAuthenticationHandler during the test run by
1. adding the following to the end of TestWebDelegationToken#setUp():
{code}
GenericTestUtils.setLogLevel(KerberosAuthenticationHandler.LOG, 
Level.TRACE);
{code}
2. changing KerberosAuthenticationHandler.LOG to public in 
KerberosAuthenticator.java,
and reattaching the log with the additional tracing enabled? This will reveal the 
cause of the SPNEGO failure from the server side. Thanks in advance!

> TestWebDelegationToken and TestKMS fails in trunk
> -------------------------------------------------
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747244#comment-15747244
 ] 

John Zhuge commented on HADOOP-13901:
-------------------------------------

[~ajisakaa] No more ASF license issues after a git clean. Sorry for the noise.

> Fix ASF License warnings
> ------------------------
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747144#comment-15747144
 ] 

Yuanbo Liu commented on HADOOP-13890:
-------------------------------------

[~xyao] Thanks for your response.
I've attached my test-failure information, and this is my git status:
{code}
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#   modified:   
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
#   modified:   
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
#   modified:   
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
#
no changes added to commit (use "git add" and/or "git commit -a")
{code}


> TestWebDelegationToken and TestKMS fails in trunk
> -------------------------------------------------
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost assumes the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13890:

Attachment: test-failure.txt

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, test-failure.txt
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principals used in these tests are incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with complete HTTP principals.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747092#comment-15747092
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-

[~yuanbo], thanks for trying the patch.  Can you post or attach the logs of 
the failed tests?  Here is the result on my local machine after the v3 patch; 
all tests passed.
{code}
---
 T E S T S
---
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.559 sec - in 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken

Results :

Tests run: 12, Failures: 0, Errors: 0, Skipped: 0

{code}


> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principals used in these tests are incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with complete HTTP principals.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747036#comment-15747036
 ] 

Akira Ajisaka commented on HADOOP-13901:


bq. Looks like we can exclude the .settings directory?
Agreed. I'll update the patch.

bq. Still got 2 ASF license issues for dev-support/bin/test-patch after 
applying the patch:
Hi [~jzhuge], hadoop-ant module has been removed in trunk. Would you run {{git 
clean -d -f}} and retry my patch?

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747009#comment-15747009
 ] 

Yuanbo Liu commented on HADOOP-13890:
-

[~xyao] Thanks for working on this JIRA.
After applying your v3 patch to my local trunk branch, the "Invalid SPNEGO 
sequence" exception still exists in {{TestWebDelegationToken}}.
Have I missed something?

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principals used in these tests are incomplete: HTTP/localhost assumes that the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with complete HTTP principals.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Resolved] (HADOOP-13891) KerberosName#KerberosName cannot parse principle without realm

2016-12-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu resolved HADOOP-13891.
-
Resolution: Resolved

> KerberosName#KerberosName cannot parse principle without realm
> --
>
> Key: HADOOP-13891
> URL: https://issues.apache.org/jira/browse/HADOOP-13891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Xiaoyu Yao
> Attachments: testKerberosName.patch
>
>
> Given a principal string like "HTTP/localhost", the returned KerberosName 
> object contains a null hostname and a null realm name. The service name is 
> incorrectly parsed as the whole string "HTTP/localhost".






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746771#comment-15746771
 ] 

Aaron Fabbri commented on HADOOP-13899:
---

+1, modulo [~liuml07]'s comments on the DDB stuff.

One comment, [~steve_l]  

{quote}
on a failure to init an implementation, fail, rather than fallback to Null. If 
there is a problem, it must be considered fatal. Otherwise, if dynamoDB is
being authoritative, a client may think it is using, but as it isn't: corruption
{quote}

I wouldn't say "corruption" but "loss of consistency".  Since S3Guard makes 
changes to the MetadataStore only after those changes are made successfully in 
S3, losing your MetadataStore should, at most, mean a return to stock S3A 
behavior.

I think failing fast here makes sense anyway.  I was originally thinking of 
people wanting to continue, with an ERROR, in "degraded consistency" mode if 
DynamoDB or their MetadataStore was not available.  If that were a concern, we 
could always add a configuration flag: "continue on metadatastore init failure".

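The fail-fast-vs-degraded-mode trade-off above could be sketched roughly as follows. This is a self-contained illustration, not the S3Guard code: the `MetadataStore` interface, the flag name `s3guard.continue.on.init.failure`, and the plain `Map` standing in for a Hadoop `Configuration` are all invented for this sketch.

```java
import java.io.IOException;
import java.util.Map;

/** Sketch: fail fast on MetadataStore init unless a (hypothetical) flag allows degraded mode. */
public class MetadataStoreInit {
    interface MetadataStore { String name(); }

    static final MetadataStore DYNAMO_STORE = () -> "dynamo";
    static final MetadataStore NULL_STORE   = () -> "null";   // stock S3A behavior

    /**
     * @param storeAvailable stands in for "DynamoDB reachable and table usable"
     * @param conf           configuration map; the flag name below is invented
     */
    static MetadataStore init(boolean storeAvailable, Map<String, String> conf)
            throws IOException {
        if (storeAvailable) {
            return DYNAMO_STORE;
        }
        boolean continueOnFailure = Boolean.parseBoolean(
                conf.getOrDefault("s3guard.continue.on.init.failure", "false"));
        if (continueOnFailure) {
            // Degraded consistency: log an ERROR in practice and fall back to stock S3A.
            return NULL_STORE;
        }
        throw new IOException("MetadataStore init failed; failing fast");
    }
}
```

With the flag left at its default the caller gets a hard failure, which matches the fail-fast position above; only an explicit opt-in yields the fallback store.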


> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch, 
> HADOOP-13899-HADOOP-13345-004.patch
>
>
> While setting up clients for testing DynamoDB, make the tweaks to the DynamoDB 
> store and the S3Guard code for better use downstream. These are the kinds of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13899:
---
Target Version/s: HADOOP-13345

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch, 
> HADOOP-13899-HADOOP-13345-004.patch
>
>
> While setting up clients for testing DynamoDB, make the tweaks to the DynamoDB 
> store and the S3Guard code for better use downstream. These are the kinds of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746694#comment-15746694
 ] 

Mingliang Liu commented on HADOOP-13899:


+1

Thanks [~ste...@apache.org] for tuning this code. The patch looks very good to 
me. I had not noticed these problems in the code before this patch; we should get 
it in. [~fabbri], you may want to have a look as well?

# {quote}
why is only one path to createTable() guarded by a configuration flag?
{quote}
My idea was to have the DynamoDBMetadataStore map to a single Table object that 
is used for all other operations: get/list/delete/put, etc. The use cases that 
create the table are 1) S3Guard.getMetadataStore() and 2) the command line tool. This 
works just fine for those cases.
# {quote}
given that the likeliest state is "Table exists", shouldn't an attempt to 
getTable() be made before that dynamoDB.createTable() call?
{quote}
{{getTable()}} will simply return a table object (no network calls). We would have 
to call {{table.describe()}} to get a resource-not-found exception before we 
create a new table. That would work as well, I think, though it may be throttled 
on the DDB server side. Issuing a create-table request is hopefully fast and simple 
in this case. Reviewing this, I found that the javadoc of {{createTable()}} is not 
quite correct.
{code}
  /**
   * Get the existing table and wait for it to become active.
  ...
  */
  void createTable() throws IOException {
{code}
We can change this to "create the table if it does not exist and wait for it to 
become active". Sorry I did not get this right in the initial patch.
# I understand that we should use a debug-level message for this. I also found 
missing code that should be addressed. Can you include this in the patch 
as well?
{code:title=DynamoDBClientFactory#createDynamoDBClient()}
  LOG.info("Creating DynamoDBClient for fsUri {} in region {}",
  fsUri, region);
{code}
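The create-if-not-exists behavior discussed above can be illustrated with a self-contained sketch. The in-memory {{FakeDynamo}} below is a stand-in for the real DynamoDB endpoint (where CreateTable on an existing table raises ResourceInUseException); all class and method names here are invented for illustration:

```java
import java.util.HashSet;
import java.util.Set;

/** Sketch of "create the table if it does not exist, then wait for ACTIVE". */
public class TableBootstrap {
    /** In-memory stand-in for a DynamoDB endpoint. */
    static class FakeDynamo {
        private final Set<String> tables = new HashSet<>();

        /** Mimics CreateTable, which fails if the table already exists. */
        void createTable(String name) {
            if (!tables.add(name)) {
                throw new IllegalStateException("ResourceInUse: " + name);
            }
        }

        boolean describe(String name) { return tables.contains(name); }
    }

    /** Issue the create request first; treat "already exists" as success. */
    static boolean ensureTable(FakeDynamo ddb, String name) {
        try {
            ddb.createTable(name);
        } catch (IllegalStateException alreadyExists) {
            // Likeliest state in practice: the table is already there.
        }
        // A real implementation would now wait for the table to become ACTIVE.
        return ddb.describe(name);
    }
}
```

Issuing the create request unconditionally and swallowing the "already exists" error avoids the extra describe round-trip on first creation, at the cost of a wasted create call in the common table-exists case.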

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch, 
> HADOOP-13899-HADOOP-13345-004.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13709) Ability to clean up subprocesses spawned by Shell when the process exits

2016-12-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746593#comment-15746593
 ] 

Hudson commented on HADOOP-13709:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10991 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10991/])
HADOOP-13709. Ability to clean up subprocesses spawned by Shell when the 
(jlowe: rev 9947aeb60c3dd075544866fd6e4dab0ad8b4afa2)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShell.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java


> Ability to clean up subprocesses spawned by Shell when the process exits
> 
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process 
> is killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.
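The cleanup described above amounts to tracking spawned subprocesses and destroying them when the JVM exits. A minimal sketch follows; {{ChildReaper}} and its method names are hypothetical and are not the actual Shell.java implementation:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/** Sketch: track spawned subprocesses and destroy them when the JVM exits. */
public class ChildReaper {
    private static final Set<Process> CHILDREN =
            Collections.synchronizedSet(new HashSet<>());

    static {
        // On JVM shutdown, kill any subprocess that is still registered.
        Runtime.getRuntime().addShutdownHook(new Thread(ChildReaper::destroyAll));
    }

    static Process launch(String... command) throws IOException {
        Process p = new ProcessBuilder(command).start();
        CHILDREN.add(p);
        return p;
    }

    /** Call when a subprocess finishes normally. */
    static void unregister(Process p) { CHILDREN.remove(p); }

    static void destroyAll() {
        synchronized (CHILDREN) {
            for (Process p : CHILDREN) {
                p.destroy();   // Java 8+ also offers destroyForcibly()
            }
            CHILDREN.clear();
        }
    }
}
```

Registering each process at launch and deregistering it on normal exit keeps the shutdown hook's work bounded to whatever is still running when the JVM goes down.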






[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746550#comment-15746550
 ] 

Xiaoyu Yao commented on HADOOP-13565:
-

Thanks [~daryn]. The problem is that in HADOOP-13565 we enforced an additional 
principal check requiring the SPNEGO principal to have three complete parts: HTTP, 
hostname, and realm. This prevents principals like HTTP/localhost from being 
used. 

By relaxing the requirement on the realm part, we maintain support for 
principals like HTTP/host. Unlike the first two patches for HADOOP-13890, the 
3rd one is a simpler fix that addresses the compatibility concerns without 
changing the original unit tests. To make this work, we also found and fixed 
the KerberosName parsing bug so that it handles principals like HTTP/host. Please 
review and let me know your thoughts.
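The relaxed parsing described above might look roughly like this. It is an illustrative re-implementation, not the actual KerberosName code: a missing realm is left null so the default realm can be applied later, instead of being rejected.

```java
/** Sketch: parse "service/host@REALM" where host and realm may be absent. */
public class PrincipalParts {
    final String service;
    final String host;    // null if absent
    final String realm;   // null if absent (default realm applied later)

    PrincipalParts(String principal) {
        String rest = principal;
        int at = rest.indexOf('@');
        if (at >= 0) {
            realm = rest.substring(at + 1);
            rest = rest.substring(0, at);
        } else {
            realm = null;   // relaxed: no realm part required
        }
        int slash = rest.indexOf('/');
        if (slash >= 0) {
            service = rest.substring(0, slash);
            host = rest.substring(slash + 1);
        } else {
            service = rest;
            host = null;
        }
    }
}
```

Under this scheme "HTTP/localhost" yields service "HTTP" and host "localhost" with a null realm, rather than a service name of the whole string.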

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch, HADOOP-13565.03.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized server 
> name derived from the HTTP request to build the server SPN and authenticate the 
> client. This can be problematic if the HTTP client/server are running from a 
> non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, 
> The server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When the client sends a request to the namenode at http://NN1.example.com:50070 from 
> client.test@test.com.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client's name resolution or the HTTP Host 
> field from the request header provided by the browser. 
> The root cause is that KerberosUtil.getServicePrincipal("HTTP", serverName) will 
> always return an SPN in the local realm (HTTP/nn.example@example.com) no 
> matter whether the server login SPN is from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the 1st parameter to gssManager.createCredential()). 
> This way we avoid dependency on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 
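The realm mismatch described in this report can be shown with a small self-contained sketch. The {{getServicePrincipal}} method below only mimics (and is not) the real KerberosUtil helper; the host and realm values are the example ones from the description:

```java
/** Sketch: why rebuilding the SPN from the request host pins it to the local realm. */
public class SpnMismatch {
    /** Mimics KerberosUtil.getServicePrincipal: it can only append the local default realm. */
    static String getServicePrincipal(String service, String host, String localRealm) {
        return service + "/" + host + "@" + localRealm;
    }

    public static void main(String[] args) {
        String rebuilt = getServicePrincipal("HTTP", "NN1.example.com", "EXAMPLE.COM");
        // The server actually logged in with an SPN from the trusted realm TEST.COM:
        String serverLogin = "HTTP/NN1.example.com@TEST.COM";
        // Mismatch -> "no valid credential" / checksum failures during SPNEGO.
        System.out.println(rebuilt.equals(serverLogin));   // prints false
    }
}
```

Passing null as the name when creating the acceptor credential sidesteps this entirely, because the server then accepts with whatever principal it logged in as rather than one reconstructed from the request.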






[jira] [Commented] (HADOOP-13709) Ability to clean up subprocesses spawned by Shell when the process exits

2016-12-13 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746547#comment-15746547
 ] 

Eric Badger commented on HADOOP-13709:
--

Fantastic! Thanks, [~jlowe], [~andrew.wang], [~daryn]! I'll put up an 
associated patch on YARN-5641 to add it to the localizer.

> Ability to clean up subprocesses spawned by Shell when the process exits
> 
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process 
> is killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.






[jira] [Updated] (HADOOP-13709) Ability to clean up subprocesses spawned by Shell when the process exits

2016-12-13 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-13709:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks to [~ebadger] for the contribution and to [~andrew.wang] and [~daryn] 
for additional review!  I committed this to trunk and branch-2.

> Ability to clean up subprocesses spawned by Shell when the process exits
> 
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process 
> is killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746534#comment-15746534
 ] 

John Zhuge commented on HADOOP-13901:
-

My changes removed {{maven-antrun-plugin}} sections from {{hadoop-kms/pom.xml}} 
and {{hadoop-hdfs-httpfs/pom.xml}}.

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Updated] (HADOOP-13709) Ability to clean up subprocesses spawned by Shell when the process exits

2016-12-13 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-13709:

Summary: Ability to clean up subprocesses spawned by Shell when the process 
exits  (was: Clean up subprocesses spawned by Shell.java:runCommand when the 
shell process exits)

> Ability to clean up subprocesses spawned by Shell when the process exits
> 
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it ignores 
> InterruptedExceptions and refuses to shut down because it is blocked on I/O, 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process is 
> killed. Currently the JVM will shut down and all of the subprocesses will be 
> orphaned rather than killed.






[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746528#comment-15746528
 ] 

Jason Lowe commented on HADOOP-13709:
-

+1 lgtm.  Committing this.

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it ignores 
> InterruptedExceptions and refuses to shut down because it is blocked on I/O, 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process is 
> killed. Currently the JVM will shut down and all of the subprocesses will be 
> orphaned rather than killed.






[jira] [Updated] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure

2016-12-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13886:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

> s3guard:  ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
> -
>
> Key: HADOOP-13886
> URL: https://issues.apache.org/jira/browse/HADOOP-13886
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13886-HADOOP-13345.000.patch
>
>
> testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) 
>  Time elapsed: 10.011 sec  <<< FAILURE!
> java.lang.AssertionError: after rename(srcFilePath, destFilePath): 
> directories_created expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254)
> More details to follow in comments.






[jira] [Commented] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure

2016-12-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746374#comment-15746374
 ] 

Mingliang Liu commented on HADOOP-13886:


test: us-west-1 (North California)

The tests pass with this patch when {{fs.s3a.metadatastore.impl}} is a) 
undefined, b) NullMetadataStore, c) LocalMetadataStore, or d) DynamoDBMetadataStore.

Thanks [~ste...@apache.org] for your review. You're right that we should not 
skip integration tests.

Committed to feature branch {{HADOOP-13345}}.

> s3guard:  ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
> -
>
> Key: HADOOP-13886
> URL: https://issues.apache.org/jira/browse/HADOOP-13886
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13886-HADOOP-13345.000.patch
>
>
> testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) 
>  Time elapsed: 10.011 sec  <<< FAILURE!
> java.lang.AssertionError: after rename(srcFilePath, destFilePath): 
> directories_created expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254)
> More details to follow in comments.






[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-12-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746346#comment-15746346
 ] 

Jason Lowe commented on HADOOP-13578:
-

The thing I'm worried about is that when we call ZSTD_compressStream we are 
passing descriptors for both the input buffer and the output buffer.  When we 
call ZSTD_endStream we are only passing the descriptor for the output buffer.  
Therefore I don't know how ZSTD_endStream is supposed to finish consuming any 
input that ZSTD_compressStream didn't get to if it doesn't have access to that 
input buffer descriptor.

Looking at the zstd code you'll see that when it does call ZSTD_compressStream 
inside ZSTD_endStream, it's calling it with srcSize == 0.  That means there is 
no more source to consume.  So if the last call of the JNI code to 
ZSTD_compressStream did not fully consume the input buffer's data (i.e.: input 
pos is not moved to the end of the data) then it looks like calling 
ZSTD_endStream will simply flush out what input data did make it and then end 
the frame.  That matches what the documentation for ZSTD_endStream says.  So I 
still think we need to make sure we do not call ZSTD_endStream if input.pos is 
not at the end of the input buffer after we call ZSTD_compressStream, or we 
risk losing the last chunk of data if the zstd library for some reason cannot 
fully consume the input buffer when we try to finish.
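The invariant described above can be shown with a toy model: a minimal Java mock (invented for illustration; this is not the zstd API or the JNI code in the patch) of a stream that, like ZSTD_compressStream, may consume only part of its input per call. The caller must loop until the input position reaches the input size before finishing the stream, or the tail of the input is silently lost:

```java
public class ZstdFinishGuard {
    // Toy stand-in for a compression stream: consumes at most 4 input bytes
    // per call, mimicking a compressor whose output buffer fills up.
    static class MockStream {
        final byte[] input;
        int pos = 0;                       // like input.pos in the zstd API
        final StringBuilder out = new StringBuilder();
        MockStream(byte[] input) { this.input = input; }

        int compressStream() {             // returns bytes consumed this call
            int n = Math.min(4, input.length - pos);
            for (int i = 0; i < n; i++) out.append((char) input[pos + i]);
            pos += n;
            return n;
        }

        String endStream() {               // only flushes; never sees input again
            return out.toString();
        }
    }

    public static void main(String[] args) {
        MockStream s = new MockStream("hello-world".getBytes());
        s.compressStream();
        // Calling s.endStream() here would return only "hell": the guard is
        // to keep calling compressStream until the input is fully consumed.
        while (s.pos < s.input.length) {
            s.compressStream();
        }
        System.out.println(s.endStream()); // prints: hello-world
    }
}
```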


> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch, 
> HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, HADOOP-13578.v4.patch, 
> HADOOP-13578.v5.patch, HADOOP-13578.v6.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  






[jira] [Updated] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-13 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13898:
--
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for your contribution, Fei Hui! I've committed this to branch-2. It 
should be released in 2.9.0.

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.9.0
>
> Attachments: HADOOP-13898-branch-2.001.patch, 
> HADOOP-13898-branch-2.002.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is unconditionally set to 
> 1000. That is incorrect.
> We should default it to 1000 only if it is empty. 
> Otherwise, if you run 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.
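The fix the summary describes is the standard default-if-empty shell idiom; a minimal sketch (set_heapsize_default is a hypothetical helper written for illustration, not the actual mapred-env.sh contents):

```shell
#!/bin/sh
# Assign the default only when the variable is unset or empty, so a value
# exported by the caller survives.
set_heapsize_default() {
  HADOOP_JOB_HISTORYSERVER_HEAPSIZE="${HADOOP_JOB_HISTORYSERVER_HEAPSIZE:-1000}"
}

HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512
set_heapsize_default
echo "$HADOOP_JOB_HISTORYSERVER_HEAPSIZE"   # prints 512, not 1000
```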






[jira] [Commented] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-13 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746274#comment-15746274
 ] 

Ravi Prakash commented on HADOOP-13898:
---

Looks good to me. +1. Committing to branch-2 shortly.

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch, 
> HADOOP-13898-branch-2.002.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is unconditionally set to 
> 1000. That is incorrect.
> We should default it to 1000 only if it is empty. 
> Otherwise, if you run 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Updated] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit

2016-12-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13508:
-
Target Version/s: 3.0.0-alpha1, 2.9.0  (was: 3.0.0-alpha1)

> FsPermission string constructor does not recognize sticky bit
> -
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch, 
> HADOOP-13508.003.patch, HADOOP-13508.004.patch, HADOOP-13508.005.patch, 
> HADOOP-13508.006.patch, HADOOP-13508.branch-2.patch
>
>
> FsPermission's string constructor breaks on valid permission strings such as 
> "1777". 
> This is because the FsPermission class naively uses UmaskParser to do its 
> parsing of permissions (from the source code):
> public FsPermission(String mode) {
>     this((new UmaskParser(mode)).getUMask());
> }
> The mode string a umask accepts is subtly different (especially with respect 
> to the sticky bit), so parsing a umask is not the same as parsing an 
> FsPermission. 
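For reference, sticky-bit-aware parsing of a mode string like "1777" is plain octal arithmetic; a self-contained sketch (StickyBitParse is a hypothetical helper for illustration, not Hadoop's FsPermission or UmaskParser):

```java
public class StickyBitParse {
    // Report whether the sticky bit (the leading octal digit's 1 bit,
    // mask 01000) is set in an octal mode string.
    static boolean isSticky(String mode) {
        return (Short.parseShort(mode, 8) & 01000) != 0;
    }

    // Extract the rwxrwxrwx permission bits (mask 0777).
    static int permBits(String mode) {
        return Short.parseShort(mode, 8) & 0777;
    }

    public static void main(String[] args) {
        // "1777" is a valid mode string: sticky bit set, permissions rwxrwxrwx.
        System.out.println(isSticky("1777") + " "
            + Integer.toOctalString(permBits("1777"))); // prints: true 777
    }
}
```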






[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts

2016-12-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746200#comment-15746200
 ] 

Sean Busbey commented on HADOOP-13780:
--

yeah that sounds like a good plan. Anything we come up with for precommit 
checking is going to be a heuristic that can always be improved.

> LICENSE/NOTICE are out of date for source artifacts
> ---
>
> Key: HADOOP-13780
> URL: https://issues.apache.org/jira/browse/HADOOP-13780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Xiao Chen
>Priority: Blocker
>
> we need to perform a check that all of our bundled works are properly 
> accounted for in our LICENSE/NOTICE files.
> At a minimum, it looks like HADOOP-10075 introduced some changes that have 
> not been accounted for.
> e.g. the jsTree plugin found at 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}}
>  does not show up in LICENSE.txt to (a) indicate that we're redistributing it 
> under the MIT option and (b) give proper citation of the original copyright 
> holder per ASF policy.






[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts

2016-12-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746176#comment-15746176
 ] 

Xiao Chen commented on HADOOP-13780:


In the interest of unblocking the 3.0.0-alpha2 release, how do people feel about 
doing #1 and #2 from [my above 
comment|https://issues.apache.org/jira/browse/HADOOP-13780?focusedCommentId=15723765=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15723765]
 in this jira, and deferring the automation (#3) to another jira? #1 is almost 
done, and #2 shouldn't be too hard, so I should be able to post a patch this week.

I have some nasty local scripts that sort of automate this, with some things that 
need manual inspection. However, even I feel those scripts are not up to 
standard... I don't want them to block our mighty Hadoop release.

> LICENSE/NOTICE are out of date for source artifacts
> ---
>
> Key: HADOOP-13780
> URL: https://issues.apache.org/jira/browse/HADOOP-13780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Xiao Chen
>Priority: Blocker
>
> we need to perform a check that all of our bundled works are properly 
> accounted for in our LICENSE/NOTICE files.
> At a minimum, it looks like HADOOP-10075 introduced some changes that have 
> not been accounted for.
> e.g. the jsTree plugin found at 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}}
>  does not show up in LICENSE.txt to (a) indicate that we're redistributing it 
> under the MIT option and (b) give proper citation of the original copyright 
> holder per ASF policy.






[jira] [Updated] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit

2016-12-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13508:
-
Attachment: HADOOP-13508.branch-2.patch

Submit branch-2 patch for precommit check.

> FsPermission string constructor does not recognize sticky bit
> -
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch, 
> HADOOP-13508.003.patch, HADOOP-13508.004.patch, HADOOP-13508.005.patch, 
> HADOOP-13508.006.patch, HADOOP-13508.branch-2.patch
>
>
> FsPermission's string constructor breaks on valid permission strings such as 
> "1777". 
> This is because the FsPermission class naively uses UmaskParser to do its 
> parsing of permissions (from the source code):
> public FsPermission(String mode) {
>     this((new UmaskParser(mode)).getUMask());
> }
> The mode string a umask accepts is subtly different (especially with respect 
> to the sticky bit), so parsing a umask is not the same as parsing an 
> FsPermission. 






[jira] [Reopened] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit

2016-12-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HADOOP-13508:
--

I am reopening this issue to backport the fix to branch-2.

Please shout out if you think this is an incompatible change (e.g. downstream 
applications that depend on the existing semantics). Thanks.

> FsPermission string constructor does not recognize sticky bit
> -
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch, 
> HADOOP-13508.003.patch, HADOOP-13508.004.patch, HADOOP-13508.005.patch, 
> HADOOP-13508.006.patch, HADOOP-13508.branch-2.patch
>
>
> FsPermission's string constructor breaks on valid permission strings such as 
> "1777". 
> This is because the FsPermission class naively uses UmaskParser to do its 
> parsing of permissions (from the source code):
> public FsPermission(String mode) {
>     this((new UmaskParser(mode)).getUMask());
> }
> The mode string a umask accepts is subtly different (especially with respect 
> to the sticky bit), so parsing a umask is not the same as parsing an 
> FsPermission. 






[jira] [Created] (HADOOP-13902) Update documentation to reflect IPv6 Hadoop cluster setup

2016-12-13 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HADOOP-13902:


 Summary: Update documentation to reflect IPv6 Hadoop cluster setup
 Key: HADOOP-13902
 URL: https://issues.apache.org/jira/browse/HADOOP-13902
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Konstantin Shvachko


Document IPv6 cluster setup.






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746033#comment-15746033
 ] 

Sean Busbey commented on HADOOP-13901:
--

The entries for {{.classpath}}, {{.project}}, and {{.settings}} only get 
flagged if a module is removed from the project (e.g. in precommit when 
something is moved, or when someone switches between branches without 
cleaning up). I'm not sure we need to exclude them.

[~jzhuge] is there a {{hadoop-tools/hadoop-ant}} module in your checkout that 
corresponds to those target directories? The RAT plugin should already be 
ignoring the {{target}} directory for anything Maven knows about.

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746024#comment-15746024
 ] 

John Zhuge commented on HADOOP-13901:
-

Still got 2 ASF license issues for dev-support/bin/test-patch after applying 
the patch:
{noformat}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? /home/jzhuge/hadoop/hadoop-tools/hadoop-ant/target/antrun/build-main.xml
 !? /home/jzhuge/hadoop/hadoop-tools/hadoop-ant/target/.plxarc
{noformat}

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15746009#comment-15746009
 ] 

Mingliang Liu commented on HADOOP-13901:


Thanks for reporting and providing a patch.

{code}
326 .settings/org.eclipse.jdt.core.prefs
{code}

Looks like we can exclude the {{.settings}} directory?
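If excluding these paths turns out to be the right call, the usual mechanism is an excludes list in the apache-rat-plugin configuration. A sketch only; the exact patterns, and which pom in Hadoop configures the plugin, may differ:

```xml
<!-- Illustrative apache-rat-plugin excludes; patterns are examples, not the
     actual Hadoop configuration. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/.classpath</exclude>
      <exclude>**/.project</exclude>
      <exclude>**/.settings/**</exclude>
    </excludes>
  </configuration>
</plugin>
```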

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}






[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-13 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745995#comment-15745995
 ] 

Eric Badger commented on HADOOP-13709:
--

The test failures are related to HADOOP-13890/HADOOP-13565 and are unrelated to 
the patch.

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it ignores 
> InterruptedExceptions and refuses to shut down because it is blocked on I/O, 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process is 
> killed. Currently the JVM will shut down and all of the subprocesses will be 
> orphaned rather than killed.






[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745952#comment-15745952
 ] 

Hudson commented on HADOOP-13900:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10990 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10990/])
HADOOP-13900. Remove snapshot version of SDK dependency from Azure Data 
(liuml07: rev ef34bf2bb92a4e8def6617b185ae72db81450de8)
* (edit) hadoop-tools/hadoop-azure-datalake/pom.xml


> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> The Azure Data Lake Store SDK that the Azure Data Lake Store File System 
> depends on has been released, so there is no further need for a snapshot 
> version dependency. This JIRA replaces the SDK snapshot dependency with the 
> released SDK candidate. There is no functional change in the SDK and no 
> impact on the live contract tests. 






[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745907#comment-15745907
 ] 

Vishwajeet Dusane commented on HADOOP-13900:


Thank you [~liuml07] and [~ste...@apache.org] for the review.

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> The Azure Data Lake Store SDK that the Azure Data Lake Store File System 
> depends on has been released, so there is no further need for a snapshot 
> version dependency. This JIRA replaces the SDK snapshot dependency with the 
> released SDK candidate. There is no functional change in the SDK and no 
> impact on the live contract tests. 






[jira] [Comment Edited] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745897#comment-15745897
 ] 

John Zhuge edited comment on HADOOP-13901 at 12/13/16 6:44 PM:
---

Thanks [~ajisakaa] for fixing the issue; it is quite annoying. I will try the 
patch the next time I run dev-support/bin/test-patch locally.


was (Author: jzhuge):
Thanks [~ajisakaa] for reporting the issue, quite annoying. I will try the 
patch when I run dev-support/bin/test-patch locally.

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745897#comment-15745897
 ] 

John Zhuge commented on HADOOP-13901:
-

Thanks [~ajisakaa] for reporting the issue; it is quite annoying. I will try 
the patch the next time I run dev-support/bin/test-patch locally.

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13900:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to the {{trunk}} branch. Thanks [~vishwajeet.dusane] for the 
contribution, and thanks [~ste...@apache.org] for the review and offline 
discussion.

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13901:
---
Assignee: Akira Ajisaka
  Status: Patch Available  (was: Open)

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13901:
---
Attachment: HADOOP-13901.01.patch

These files are generated by other Maven plugins, so we can safely exclude 
them from the license check. Attaching a simple patch that ignores them.
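For illustration, excluding generated files from the ASF license check is typically done via the {{apache-rat-plugin}} exclusion list in the root {{pom.xml}}. The following is only a sketch of that approach and may not match the attached patch exactly:

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Eclipse artifacts generated by other plugins; no license header -->
      <exclude>**/maven-eclipse.xml</exclude>
      <exclude>**/.externalToolBuilders/**</exclude>
      <exclude>**/.classpath</exclude>
      <exclude>**/.project</exclude>
      <exclude>**/.settings/**</exclude>
    </excludes>
  </configuration>
</plugin>
```

The check can then be re-run locally with {{mvn apache-rat:check}} to confirm the warnings are gone.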

> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
> Attachments: HADOOP-13901.01.patch
>
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-13901:
--

 Summary: Fix ASF License warnings
 Key: HADOOP-13901
 URL: https://issues.apache.org/jira/browse/HADOOP-13901
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


Hadoop-side of YETUS-473.
https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
{format}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
 !? 
/testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
 !? hadoop-client/.classpath
 !? hadoop-client/.project
 !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
{format}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13901) Fix ASF License warnings

2016-12-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13901:
---
Description: 
Hadoop-side of YETUS-473.
https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
{noformat}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
 !? 
/testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
 !? hadoop-client/.classpath
 !? hadoop-client/.project
 !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
{noformat}

  was:
Hadoop-side of YETUS-473.
https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
{format}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
 !? 
/testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
 !? hadoop-client/.classpath
 !? hadoop-client/.project
 !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
{format}


> Fix ASF License warnings
> 
>
> Key: HADOOP-13901
> URL: https://issues.apache.org/jira/browse/HADOOP-13901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>
> Hadoop-side of YETUS-473.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11239/artifact/patchprocess/patch-asflicense-problems.txt
> {noformat}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-build-tools/maven-eclipse.xml
>  !? 
> /testptch/hadoop/hadoop-build-tools/.externalToolBuilders/Maven_Ant_Builder.launch
>  !? hadoop-client/.classpath
>  !? hadoop-client/.project
>  !? hadoop-client/.settings/org.eclipse.jdt.core.prefs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745791#comment-15745791
 ] 

Vishwajeet Dusane commented on HADOOP-13900:


Sorry [~ste...@apache.org] about the snapshot dependency; earlier the SDK 
version was not available in the public Maven repository, so we had to add the 
snapshot dependency. Also, thanks for the +1.

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745764#comment-15745764
 ] 

Steve Loughran commented on HADOOP-13900:
-

+1. 

We should *never* have any snapshot dependencies in the code, and hence no 
repositories pulling them in either.
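One way to enforce this rule at build time is the Maven Enforcer plugin's {{requireReleaseDeps}} rule. This is a sketch of that technique, not necessarily how Hadoop's build is configured:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>no-snapshot-deps</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- Fails the build if any dependency is a -SNAPSHOT version -->
          <requireReleaseDeps>
            <message>Snapshot dependencies are not allowed.</message>
          </requireReleaseDeps>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, a stray snapshot dependency fails {{mvn verify}} instead of slipping into a release.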

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745761#comment-15745761
 ] 

Hadoop QA commented on HADOOP-13900:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13900 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843037/HDFS-11240-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux d1579ce75e99 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0b033e |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11267/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11267/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745733#comment-15745733
 ] 

Hadoop QA commented on HADOOP-13900:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13900 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843032/HDFS-11240-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 9ec2a57c8747 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0b033e |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11266/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11266/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13900:
---
Attachment: HDFS-11240-002.patch

Removed the commented-out {{repositories}} section from the {{pom.xml}}.

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13900:
---
Status: Patch Available  (was: Open)

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745699#comment-15745699
 ] 

Vishwajeet Dusane commented on HADOOP-13900:


Thank you [~liuml07] for moving this JIRA to the appropriate project. I will 
take care of this going forward.

I reran the integration tests with SDK version {{2.0.11}} and all tests (722 
integration tests) are passing. The {{repositories}}-related code is removed; 
unfortunately, the patch I posted still contained the commented-out code. 
Raising a 2nd iteration with {{repositories}} removed.
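For reference, pinning to the released SDK instead of a snapshot amounts to a plain dependency entry in the {{hadoop-azure-datalake}} {{pom.xml}}, with no custom {{repositories}} section. The coordinates below are an assumption based on the SDK version mentioned above, not a quote from the patch:

```xml
<!-- Released ADLS SDK from Maven Central; no snapshot repository needed -->
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-data-lake-store-sdk</artifactId>
  <version>2.0.11</version>
</dependency>
```

Because the artifact resolves from Maven Central, the commented-out snapshot {{repositories}} block can simply be deleted.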

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13900:
---
Status: Open  (was: Patch Available)

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch
>
>
> Azure Data Lake Store File System depends on the Azure Data Lake Store SDK,
> which has now been released, so there is no need for a further snapshot
> version dependency. This JIRA replaces the SDK snapshot dependency with the
> released SDK candidate. There is no functional change in the SDK and no
> impact to the live contract tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745673#comment-15745673
 ] 

Hadoop QA commented on HADOOP-13900:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843032/HDFS-11240-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ee45fa5c22e6 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0b033e |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17849/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17849/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch
>
>
> The Azure Data Lake Store SDK that the Azure Data Lake Store File System 
> depends on has now been released, so there is no further need for a snapshot 
> version dependency. This JIRA replaces the SDK snapshot dependency with the 
> released SDK. There is no functional change in the SDK and no impact on the 
> live contract tests. 





[jira] [Moved] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu moved HDFS-11240 to HADOOP-13900:
---

Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha2)
 Component/s: (was: fs/azure)
  fs/azure
 Key: HADOOP-13900  (was: HDFS-11240)
 Project: Hadoop Common  (was: Hadoop HDFS)

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch
>
>
> The Azure Data Lake Store SDK that the Azure Data Lake Store File System 
> depends on has now been released, so there is no further need for a snapshot 
> version dependency. This JIRA replaces the SDK snapshot dependency with the 
> released SDK. There is no functional change in the SDK and no impact on the 
> live contract tests. 






[jira] [Commented] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745667#comment-15745667
 ] 

Mingliang Liu commented on HADOOP-13900:


Moved this to the HADOOP Common project.

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HDFS-11240-001.patch
>
>
> The Azure Data Lake Store SDK that the Azure Data Lake Store File System 
> depends on has now been released, so there is no further need for a snapshot 
> version dependency. This JIRA replaces the SDK snapshot dependency with the 
> released SDK. There is no functional change in the SDK and no impact on the 
> live contract tests. 






[jira] [Updated] (HADOOP-13257) Improve Azure Data Lake contract tests.

2016-12-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13257:
---
Component/s: fs/azure

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> more closely.






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745566#comment-15745566
 ] 

Hadoop QA commented on HADOOP-13899:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m 
53s{color} | {color:red} root in HADOOP-13345 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 7 unchanged - 1 fixed = 13 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843024/HADOOP-13899-HADOOP-13345-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 35615d99118f 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 49965d2 |
| Default Java | 1.8.0_111 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11265/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11265/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11265/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11265/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue 

[jira] [Commented] (HADOOP-13897) TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently

2016-12-13 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745554#comment-15745554
 ] 

Tony Wu commented on HADOOP-13897:
--

Hi [~steve_l], thanks a lot for your response.

{{TestAdlFileContextMainOperationsLive}} does set up the FC to use ADL as the 
default FS:
{code}
public class TestAdlFileContextMainOperationsLive
extends FileContextMainOperationsBaseTest {
...
  @Override
  public void setUp() throws Exception {
Configuration conf = AdlStorageConfiguration.getConfiguration();
String fileSystem = conf.get(KEY_FILE_SYSTEM);
if (fileSystem == null || fileSystem.trim().length() == 0) {
  throw new Exception("Default file system not configured.");
}
URI uri = new URI(fileSystem);
FileSystem fs = AdlStorageConfiguration.createStorageConnector();
fc = FileContext.getFileContext(
new DelegateToFileSystem(uri, fs, conf, fs.getScheme(), false) {
}, conf);
super.setUp();
  }
{code}

However, the failing test creates a second FC with the default config:
{code}
  @Test
  /*
   * Test method
   *  org.apache.hadoop.fs.FileContext.getFileContext(AbstractFileSystem)
   */
  public void testGetFileContext1() throws IOException {
final Path rootPath = getTestRootPath(fc, "test");
AbstractFileSystem asf = fc.getDefaultFileSystem();
// create FileContext using the protected #getFileContext(1) method:
FileContext fc2 = FileContext.getFileContext(asf); // <-- 2nd FC created
// Now just check that this context can do something reasonable:
final Path path = new Path(rootPath, "zoo");
FSDataOutputStream out = fc2.create(path, EnumSet.of(CREATE),
Options.CreateOpts.createParent());
out.close();
Path pathResolved = fc2.resolvePath(path);
assertEquals(pathResolved.toUri().getPath(), path.toUri().getPath());
  }
{code}

{{FileContext.getFileContext()}} uses the default configuration:
{code}
  /**
   * Create a FileContext for specified file system using the default config.
   * 
   * @param defaultFS
   * @return a FileContext with the specified AbstractFileSystem
   * as the default FS.
   */
  protected static FileContext getFileContext(
final AbstractFileSystem defaultFS) {
return getFileContext(defaultFS, new Configuration());
  }
{code}

It looks like {{TestAdlFileContextMainOperationsLive#testGetFileContext1}} 
should be using {{AdlStorageConfiguration.getConfiguration()}} to create 
{{fc2}}. Or perhaps {{testGetFileContext1}} should be omitted, as the protected 
API {{FileContext#getFileContext(final AbstractFileSystem defaultFS)}} does not 
appear to be used anywhere other than in this test case.

The rest of {{TestAdlFileContextMainOperationsLive}} and all other 
{{hadoop-azure-datalake}} live tests pass successfully during my testing.
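The config-loss failure mode described above can be shown with a tiny self-contained model. The {{Configuration}} and {{FileContextModel}} classes below are hypothetical stand-ins for the Hadoop types, not the real API; they only mirror the shape of the two {{getFileContext}} overloads:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for org.apache.hadoop.conf.Configuration.
class Configuration {
    final Map<String, String> props = new HashMap<>();
    String get(String key) { return props.get(key); }
    void set(String key, String value) { props.put(key, value); }
}

// Hypothetical stand-in for FileContext, modeling only the two overloads.
class FileContextModel {
    final Configuration conf;
    FileContextModel(Configuration conf) { this.conf = conf; }

    // Mirrors FileContext.getFileContext(AbstractFileSystem): ignores the
    // caller's settings and builds a fresh default Configuration.
    static FileContextModel getFileContext() {
        return new FileContextModel(new Configuration());
    }

    // The suggested fix: thread the live-test configuration through.
    static FileContextModel getFileContext(Configuration conf) {
        return new FileContextModel(conf);
    }
}

public class Demo {
    public static void main(String[] args) {
        Configuration adlConf = new Configuration();
        adlConf.set("fs.adl.oauth2.client.id", "test-id"); // live-test setting

        FileContextModel viaDefault = FileContextModel.getFileContext();
        FileContextModel viaAdlConf = FileContextModel.getFileContext(adlConf);

        // The default-config context has silently lost the ADL settings.
        System.out.println(viaDefault.conf.get("fs.adl.oauth2.client.id")); // null
        System.out.println(viaAdlConf.conf.get("fs.adl.oauth2.client.id")); // test-id
    }
}
```

The suggested fix corresponds to calling the overload that accepts the live-test configuration, so {{fc2}} keeps the ADL settings.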

> TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently
> ---
>
> Key: HADOOP-13897
> URL: https://issues.apache.org/jira/browse/HADOOP-13897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: Tony Wu
>
> {{TestAdlFileContextMainOperationsLive#testGetFileContext1}} (this is a live 
> test against Azure Data Lake Store) fails consistently with the following 
> error:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.55 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> testGetFileContext1(org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive)
>   Time elapsed: 11.229 sec  <<< ERROR!
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:328)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
>   at 
> org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:328)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:320)
>   at 

[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Attachment: HADOOP-13899-HADOOP-13345-004.patch

Patch 004: same as 003, with the filename corrected.

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch, 
> HADOOP-13899-HADOOP-13345-004.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Status: Patch Available  (was: Open)

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch, 
> HADOOP-13899-HADOOP-13345-004.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Affects Version/s: (was: 3.0.0-alpha2)
   HADOOP-13345
   Status: Open  (was: Patch Available)

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745520#comment-15745520
 ] 

Steve Loughran commented on HADOOP-13336:
-

Yeah, you can't use key:secret@bucket for config. But here's the thing: you 
shouldn't be doing that for security reasons. The sole justification today is 
that it's the only way to work across accounts. With different config sets for 
different buckets, that use case changes.

And yes, it was implicit in my thoughts that username only -> config, 
user:pass -> credentials. 
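A rough sketch of how such per-bucket overrides might be resolved. The {{fs.s3a.bucket.<name>.<option>}} key layout and the helper name are assumptions for illustration, not a committed design, and a plain {{Map}} stands in for the Hadoop {{Configuration}}:

```java
import java.util.HashMap;
import java.util.Map;

public class PerBucketConfig {
    // Any key of the (assumed) form fs.s3a.bucket.<bucket>.<option> shadows
    // the corresponding fs.s3a.<option> for that bucket only; other buckets
    // keep the base settings.
    static Map<String, String> resolveForBucket(Map<String, String> conf,
                                                String bucket) {
        String prefix = "fs.s3a.bucket." + bucket + ".";
        Map<String, String> resolved = new HashMap<>(conf);
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                String option = e.getKey().substring(prefix.length());
                resolved.put("fs.s3a." + option, e.getValue());
            }
        }
        return resolved;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.endpoint", "s3.amazonaws.com");
        conf.put("fs.s3a.bucket.frankfurt.endpoint",
                 "s3.eu-central-1.amazonaws.com");

        // The frankfurt bucket picks up its own endpoint; any other bucket
        // falls back to the base one.
        System.out.println(
            resolveForBucket(conf, "frankfurt").get("fs.s3a.endpoint"));
    }
}
```

This keeps secrets out of URLs entirely: the bucket name alone selects the config set, and credentials stay in the credential providers.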

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra






[jira] [Commented] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745507#comment-15745507
 ] 

Hadoop QA commented on HADOOP-13455:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843010/HADOOP-13455-HADOOP-13345-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 1583df343c63 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / d354cd1 |
| Default Java | 1.8.0_111 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11263/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11263/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11263/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11263/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
>   

[jira] [Updated] (HADOOP-13893) dynamodb dependency -> compile

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13893:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

+1
committed. Thanks!

> dynamodb dependency -> compile
> --
>
> Key: HADOOP-13893
> URL: https://issues.apache.org/jira/browse/HADOOP-13893
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13893-HADOOP-13345.000.patch
>
>
> unless/until we can go back to a unified JAR for the AWS SDK, we need to add 
> the dynamoDB dependencies to the compile category, so it gets picked up  
> downstream.
> without this, clients may discover that they cant talk to s3guard endpoints.






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745439#comment-15745439
 ] 

Hadoop QA commented on HADOOP-13899:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-13899 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843017/HADOOP-13455-HADOOP-13899-003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11264/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Status: Patch Available  (was: Open)

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Attachment: HADOOP-13455-HADOOP-13899-003.patch

Patch 003: actually swallow all exceptions raised in destroy() and close() 
during teardown.
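One minimal way to express "swallow exceptions in teardown", with illustrative names rather than the actual test code: failures in cleanup are logged and suppressed so they cannot mask the original test failure.

```java
public class RobustTeardown {
    // Minimal closeable abstraction for the sketch.
    interface Closeable { void close() throws Exception; }

    // Log-and-suppress helper for teardown: returns whether close()
    // completed normally, and never propagates the exception.
    static boolean closeQuietly(Closeable c) {
        if (c == null) {
            return false; // nothing to close (e.g. setup failed early)
        }
        try {
            c.close();
            return true;
        } catch (Exception e) {
            System.err.println("ignored in teardown: " + e);
            return false;
        }
    }
}
```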

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13455-HADOOP-13899-003.patch, 
> HADOOP-13899-HADOOP-13345-001.patch, HADOOP-13899-HADOOP-13345-002.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745425#comment-15745425
 ] 

Steve Loughran commented on HADOOP-13455:
-

Test failures are happening in the destroy() call during teardown; possibly 
creation/setup failed, but that information is lost. Added more robustness in 
HADOOP-13899.

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345-002.patch, 
> HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Attachment: HADOOP-13899-HADOOP-13345-002.patch

HADOOP-13899

* in destroy(), do nothing if table==null (fixes some failing tests on Jenkins)
* in destroy(), assert that the dynamo DB connection exists. That's done after 
the table check, so it's actually the situation {{table != null && dynamoDB == 
null}}; I'd consider that a problem.
* downgrade shutdown log. We shouldn't be adding more information to the normal 
output logs when things are working.
* a bit more logging at debug
* marked {{createTable()}} as visible for testing.
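The destroy() guards described above might look roughly like this. This is an illustrative sketch only, not the actual DynamoDBMetadataStore code: the class and the stand-in field types are invented for the example, and only the null-check logic mirrors the comment.

```java
// Illustrative sketch of the guarded teardown described above.
// The Object fields stand in for the real DynamoDB Table and client handles.
public class DestroyGuardSketch {
    private final Object table;    // stand-in for the DynamoDB Table handle
    private final Object dynamoDB; // stand-in for the DynamoDB client

    public DestroyGuardSketch(Object table, Object dynamoDB) {
        this.table = table;
        this.dynamoDB = dynamoDB;
    }

    /** Returns true if teardown proceeded, false if skipped (table == null). */
    public boolean destroy() {
        if (table == null) {
            // Nothing to tear down; avoids failing tests whose setup never created a table.
            return false;
        }
        if (dynamoDB == null) {
            // table != null but dynamoDB == null is an inconsistent state: fail loudly.
            throw new IllegalStateException("table exists but DynamoDB client is null");
        }
        // ... real code would delete the table here and wait for completion ...
        return true;
    }
}
```

The point of the second guard is that a missing client with a live table handle indicates a setup bug, so it should surface as an error rather than be silently skipped.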


> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13899-HADOOP-13345-001.patch, 
> HADOOP-13899-HADOOP-13345-002.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Summary: tune dynamodb client & tests  (was: tune dynamodb client & tests; 
document)

> tune dynamodb client & tests
> 
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13899-HADOOP-13345-001.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests; document

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745391#comment-15745391
 ] 

Steve Loughran commented on HADOOP-13899:
-

Also to consider (but not included in this patch):

# why is only one path to {{createTable()}} guarded by a configuration flag?
# given that the likeliest state is "Table exists", shouldn't an attempt to 
{{getTable()}} be made before that {{dynamoDB.createTable()}} call?
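The "probe before create" idiom suggested in point 2 can be sketched as follows. This is illustrative only: the map stands in for the DynamoDB service, and the method names are invented for the example rather than taken from the AWS SDK.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of trying getTable() before createTable(), since
// "table already exists" is the likeliest state.
public class TableLookupSketch {
    private final Map<String, String> existingTables = new HashMap<>();
    private int createCalls = 0;

    /** Returns the table, creating it only when the (cheap) lookup misses. */
    public String getOrCreate(String name) {
        String table = existingTables.get(name); // cheap probe first: the common case
        if (table == null) {
            createCalls++;                        // expensive create path, taken only on a miss
            table = "table:" + name;
            existingTables.put(name, table);
        }
        return table;
    }

    public int getCreateCalls() {
        return createCalls;
    }
}
```

A second call for the same name should hit the probe and never reach the create path.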

> tune dynamodb client & tests; document
> --
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13899-HADOOP-13345-001.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13455:

Attachment: HADOOP-13455-HADOOP-13345-002.patch

I wasn't expecting that :)

I'd actually started on some in HADOOP-13899 in the expectation that nobody was 
going to do it. Nice to see I was wrong!

I've merged in my changes from HADOOP-13899, especially the stuff at the 
beginning about how this is experimental.

Added: proofreading, minor formatting changes, a bit on logging (also updating 
test/resources/log4j.properties to match).

FWIW, I'd recommend turning on s3guard in `auth-keys.xml`: that way there are no 
changes for git to accidentally commit. I listed both files without stating a 
preference.

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345-002.patch, 
> HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Updated] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13455:

Status: Patch Available  (was: Open)

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345-002.patch, 
> HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Updated] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13455:

Status: Open  (was: Patch Available)

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-13 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745359#comment-15745359
 ] 

Sean Mackrory commented on HADOOP-13336:


Worth pointing out that option C would also break compatibility for some folks. 
If they're currently using s3a://access-key:secret-key@bucket/, that'll 
change. We could say that if they provide a username *and* password, or if the 
username is not a valid configuration domain (it's unlikely someone is using 
an access key id as a configuration key, I think), we interpret it the 
current way. But that's a little clunky.
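The ambiguity above can be made concrete with a small sketch. This is illustrative only, not S3A's actual URI handling: it just shows that a `user:pass@bucket` authority is distinguishable from a `bucket.region`-style suffix by the presence of userinfo.

```java
import java.net.URI;

// Illustrative sketch: distinguishing embedded credentials in an s3a URI
// from an option-C style "bucket.domain" authority.
public class BucketUriSketch {
    /** Returns true when the authority carries user:pass credentials. */
    public static boolean hasEmbeddedCredentials(String uri) {
        String userInfo = URI.create(uri).getUserInfo();
        return userInfo != null && userInfo.contains(":");
    }
}
```

Even with such a check, secret keys containing URI-reserved characters would still parse unreliably, which is part of why the in-URL credential form is problematic.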

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now supports different regions by way of declaring the endpoint, but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seoul, then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain, and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests; document

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745281#comment-15745281
 ] 

Steve Loughran commented on HADOOP-13899:
-

Tested all non-scale integration tests against S3 Ireland. I know S3Guard 
shouldn't be going near AWS yet, but it's good to make sure.

> tune dynamodb client & tests; document
> --
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13899-HADOOP-13345-001.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13831) Correct check for error code to detect Azure Storage Throttling and provide retries

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745277#comment-15745277
 ] 

Steve Loughran commented on HADOOP-13831:
-

# assigned to you
# Because yetus/jenkins doesn't test them, we try to keep a strict policy of 
"submitter must attest to having passed tests for the relevant object store": 
https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure
 . Can you confirm that you have done so?

> Correct check for error code to detect Azure Storage Throttling and provide 
> retries
> ---
>
> Key: HADOOP-13831
> URL: https://issues.apache.org/jira/browse/HADOOP-13831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-13831.001.patch
>
>
>  Azure Storage throttling affects HBase operations such as archiving old 
> WALs and others. In such cases the storage driver needs to detect and handle 
> the exception. We put in this logic to do the retries; however, the condition 
> to check for the exception is not always met due to inconsistency in the 
> manner the error code is passed back. Instead, the retry logic should 
> check for the HTTP status code (503), which is a more reliable and consistent check
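A minimal sketch of retrying on the 503 status code, as the issue above proposes. This is illustrative only, not the hadoop-azure driver: the retry loop, method names, and status-supplier shape are invented for the example; in real code the loop would also apply a backoff between attempts.

```java
import java.util.function.IntSupplier;

// Illustrative sketch: retry on HTTP 503 (throttling) rather than matching
// an inconsistently-reported error code string.
public class ThrottleRetrySketch {
    static final int HTTP_UNAVAILABLE = 503; // throttling surfaces as HTTP 503

    /** Retry decision based on the HTTP status code, which is reported consistently. */
    public static boolean shouldRetry(int httpStatus) {
        return httpStatus == HTTP_UNAVAILABLE;
    }

    /** Repeats the operation while it is throttled, up to maxAttempts; returns the last status. */
    public static int callWithRetries(IntSupplier op, int maxAttempts) {
        int status = op.getAsInt();
        int attempts = 1;
        while (shouldRetry(status) && attempts < maxAttempts) {
            // real code would back off (e.g. exponentially) before retrying
            status = op.getAsInt();
            attempts++;
        }
        return status;
    }
}
```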






[jira] [Updated] (HADOOP-13831) Correct check for error code to detect Azure Storage Throttling and provide retries

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13831:

Assignee: Gaurav Kanade

> Correct check for error code to detect Azure Storage Throttling and provide 
> retries
> ---
>
> Key: HADOOP-13831
> URL: https://issues.apache.org/jira/browse/HADOOP-13831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-13831.001.patch
>
>
>  Azure Storage throttling affects HBase operations such as archiving old 
> WALs and others. In such cases the storage driver needs to detect and handle 
> the exception. We put in this logic to do the retries; however, the condition 
> to check for the exception is not always met due to inconsistency in the 
> manner the error code is passed back. Instead, the retry logic should 
> check for the HTTP status code (503), which is a more reliable and consistent check






[jira] [Commented] (HADOOP-13897) TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745253#comment-15745253
 ] 

Steve Loughran commented on HADOOP-13897:
-

the setup of the test should be setting up the FC to have ADL as the default 
FS; if it's failing, then look at the test setup.

> TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently
> ---
>
> Key: HADOOP-13897
> URL: https://issues.apache.org/jira/browse/HADOOP-13897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: Tony Wu
>
> {{TestAdlFileContextMainOperationsLive#testGetFileContext1}} (this is a live 
> test against Azure Data Lake Store) fails consistently with the following 
> error:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.55 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> testGetFileContext1(org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive)
>   Time elapsed: 11.229 sec  <<< ERROR!
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:328)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
>   at 
> org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:328)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:320)
>   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:85)
>   at org.apache.hadoop.fs.FileContext.create(FileContext.java:685)
>   at 
> org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testGetFileContext1(FileContextMainOperationsBaseTest.java:1350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: 

[jira] [Commented] (HADOOP-13883) Add description of -fs option in generic command usage

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745248#comment-15745248
 ] 

Hadoop QA commented on HADOOP-13883:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} root: The patch generated 0 new + 58 unchanged - 3 
fixed = 58 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 31s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 
39s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13883 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842975/HADOOP-13883.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f421c27bb86f 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0b033e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11262/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11262/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 U: . |
| Console 

[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests; document

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Attachment: HADOOP-13899-HADOOP-13345-001.patch

Patch 001, to apply against the s3guard patch.

The changes mentioned, plus:

* {{DynamoDBMetadataStore}} to throw {{InterruptedIOException}} when IO 
operations/waits are interrupted
* {{DynamoDBClientFactory}} to use {{getTrimmed()}} to get config options 
(this should be done throughout the module as general best practice)
* Move verifyFileStatus methods to S3ATestUtils for reuse; include 
{{status.toString()}} in the assertion message.

The patch to {{DynamoDBMetadataStore.initialize()}} to use .getBucket() to 
determine the bucket is critical; the rest are improvements. 

I can see that the change to fail if the requested back end isn't available may 
be something people disagree with, but consider this:
* it avoids those support calls: "S3 client works slower than promised"
* it avoids those support calls: "data overwritten and lost on commits"

We don't want those, especially the latter.
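The interrupted-wait translation mentioned in the patch notes could look something like this. Illustrative only: the method name and the sleep stand-in are invented for the example; the real store waits on DynamoDB table state rather than sleeping.

```java
import java.io.InterruptedIOException;

// Illustrative sketch: translate an interrupted wait into InterruptedIOException,
// so callers working in IOException terms still see the interruption.
public class InterruptTranslationSketch {
    /** Wraps an interruptible wait so callers see an IOException subclass. */
    public static void awaitTableActive(long millis) throws InterruptedIOException {
        try {
            Thread.sleep(millis); // stand-in for polling DynamoDB table state
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt status
            InterruptedIOException ioe =
                new InterruptedIOException("interrupted waiting for table");
            ioe.initCause(e);
            throw ioe;
        }
    }
}
```

Restoring the interrupt flag before throwing matters: code further up the stack can still observe that the thread was interrupted.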

> tune dynamodb client & tests; document
> --
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13899-HADOOP-13345-001.patch
>
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Comment Edited] (HADOOP-13899) tune dynamodb client & tests; document

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745061#comment-15745061
 ] 

Steve Loughran edited comment on HADOOP-13899 at 12/13/16 1:49 PM:
---

# {{DynamoDBMetadataStore}} to use {{translateException()}} for failures in 
{{init}}.
# {{DynamoDBMetadataStore}} uses the authority to get the bucket 
({{s3afs.getUri().getAuthority()}}) when it must be getHost(). Fix: switch to 
{{S3AFilesystem.getBucket()}}.
# move the public S3Guard constants into S3A.Constants for use by other 
applications.
# Define constants for the standard implementations of the metastore (null, 
local, dynamo)
# on a failure to init an implementation, fail rather than fall back to Null. 
If there is a problem, it must be considered fatal. Otherwise, if dynamoDB is
being authoritative, a client may think it is in use when it isn't: corruption
# Document this stuff a bit



was (Author: ste...@apache.org):

# {{DynamoDBMetadataStore}] to use {{translateException()}} for failures in in 
{{init}}.
# {{DynamoDBMetadataStore}] uses authority to get bucket 
{{s3afs.getUri().getAuthority()}} when it must be getHost(). Fix: Switch to 
{{S3AFilesystem.getBucket()}}.
# move the public S3Guard constants into S3A.Constants for use by other 
applications.
# Define constants for the standard implementations of the metastore (null, 
local, dynamo)
# on a failure to init an implementation, fail, rather than fallback to Null. 
If there is a problem, it must be considered fatal. Otherwise, if dynamoDB is
being authoritative, a client may think it is using, but as it isn't: corruption
# Document this stuff a bit


> tune dynamodb client & tests; document
> --
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13899) tune dynamodb client & tests; document

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745061#comment-15745061
 ] 

Steve Loughran commented on HADOOP-13899:
-


# {{DynamoDBMetadataStore}} to use {{translateException()}} for failures in 
{{init}}.
# {{DynamoDBMetadataStore}} uses the authority to get the bucket 
({{s3afs.getUri().getAuthority()}}) when it must be getHost(). Fix: switch to 
{{S3AFilesystem.getBucket()}}.
# move the public S3Guard constants into S3A.Constants for use by other 
applications.
# Define constants for the standard implementations of the metastore (null, 
local, dynamo)
# on a failure to init an implementation, fail rather than fall back to Null. 
If there is a problem, it must be considered fatal. Otherwise, if dynamoDB is
being authoritative, a client may think it is in use when it isn't: corruption
# Document this stuff a bit
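The fail-fast behaviour in point 5 can be sketched as follows. Illustrative only: the constant values and method name are invented for the example, not the actual S3A Constants or factory code.

```java
// Illustrative sketch of fail-fast metastore selection: an unknown or
// unavailable implementation is an error, never a silent fallback to the
// null store. Constant names here are illustrative.
public class MetastoreInitSketch {
    static final String NULL_IMPL = "null";
    static final String LOCAL_IMPL = "local";
    static final String DYNAMO_IMPL = "dynamo";

    /** Resolves the configured store name; throws rather than falling back. */
    public static String resolveStore(String configured) {
        switch (configured) {
            case NULL_IMPL:
            case LOCAL_IMPL:
            case DYNAMO_IMPL:
                return configured;
            default:
                // Silently degrading to the null store could mean a client
                // believes DynamoDB is authoritative when it isn't: corruption.
                throw new IllegalArgumentException(
                    "Unknown metadata store: " + configured);
        }
    }
}
```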


> tune dynamodb client & tests; document
> --
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Updated] (HADOOP-13899) tune dynamodb client & tests; document

2016-12-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13899:

Summary: tune dynamodb client & tests; document  (was: tune dynamodb clien 
& tests; document)

> tune dynamodb client & tests; document
> --
>
> Key: HADOOP-13899
> URL: https://issues.apache.org/jira/browse/HADOOP-13899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
> store and the s3guard code for better use downstream. These are the kind of 
> things we need to round off the code for production use.






[jira] [Created] (HADOOP-13899) tune dynamodb clien & tests; document

2016-12-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13899:
---

 Summary: tune dynamodb clien & tests; document
 Key: HADOOP-13899
 URL: https://issues.apache.org/jira/browse/HADOOP-13899
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Steve Loughran


While setting up clients for testing dynamo DB, make the tweaks to the dynamo 
store and the s3guard code for better use downstream. These are the kind of 
things we need to round off the code for production use.






[jira] [Commented] (HADOOP-13883) Add description of -fs option in generic command usage

2016-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745002#comment-15745002
 ] 

Hadoop QA commented on HADOOP-13883:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  7m 
48s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 48s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 42s{color} | {color:orange} root: The patch generated 2 new + 26 unchanged - 
35 fixed = 28 total (was 61) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13883 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842970/HADOOP-13883.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2419a97618bb 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0b033e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11261/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 |
| compile | 

[jira] [Commented] (HADOOP-13869) using HADOOP_USER_CLASSPATH_FIRST inconsistently

2016-12-13 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744996#comment-15744996
 ] 

Fei Hui commented on HADOOP-13869:
--

cc [~raviprak]

> using HADOOP_USER_CLASSPATH_FIRST inconsistently
> 
>
> Key: HADOOP-13869
> URL: https://issues.apache.org/jira/browse/HADOOP-13869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13869.001.patch
>
>
> I find HADOOP_USER_CLASSPATH_FIRST is used inconsistently: in some places it 
> is set to "true", in others to "yes".
> I know it doesn't matter functionally, because the scripts only check whether 
> HADOOP_USER_CLASSPATH_FIRST is non-empty.
> But it may be better to set HADOOP_USER_CLASSPATH_FIRST uniformly.
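The inconsistency described above is harmless because the scripts only test the variable for non-emptiness; a minimal sketch (jar names and values are hypothetical) illustrates why "true" and "yes" behave identically:

```shell
# The scripts never compare the value; they only test for non-empty, so
# "true", "yes", or any other non-empty string all enable the behaviour.
HADOOP_USER_CLASSPATH_FIRST=yes        # "true" would behave identically
if [ -n "$HADOOP_USER_CLASSPATH_FIRST" ]; then
  CLASSPATH="user.jar:hadoop-core.jar" # user entries prepended
else
  CLASSPATH="hadoop-core.jar:user.jar"
fi
echo "$CLASSPATH"
```

Unifying on a single value is purely a readability improvement; no behaviour changes.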






[jira] [Commented] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-13 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744993#comment-15744993
 ] 

Fei Hui commented on HADOOP-13898:
--

cc [~raviprak]

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch, 
> HADOOP-13898-branch-2.002.patch
>
>
> In mapred-env.sh, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by 
> default unconditionally. That is incorrect.
> We should set it to 1000 by default only if it is empty.
> Because if you run 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.
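The proposed fix is the standard shell default-if-unset idiom; a minimal sketch with hypothetical values:

```shell
# Simulate a caller who has already exported a heap size before invoking
# mr-jobhistory-daemon.sh:
HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512

# Buggy form (overwrites the caller's value unconditionally):
#   HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
# Fixed form: default to 1000 only when the variable is unset or empty.
HADOOP_JOB_HISTORYSERVER_HEAPSIZE=${HADOOP_JOB_HISTORYSERVER_HEAPSIZE:-1000}

echo "$HADOOP_JOB_HISTORYSERVER_HEAPSIZE"   # keeps the caller's 512
```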






[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744989#comment-15744989
 ] 

Steve Loughran commented on HADOOP-13336:
-

bq. Also, does this break folks who use FQDN bucket names?

yes, hence my lack of enthusiasm.

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra






[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744984#comment-15744984
 ] 

Steve Loughran commented on HADOOP-13336:
-

Note we don't need to change existing URIs, just new ones that have a 
different config from the default. Use the default values and you get what's 
normal; it's just no longer transparent what you've got.

Oh, one little issue: on 2.7-2.8, if you include a user in the authority but 
no password (e.g. no : or details), the login information is just silently 
ignored. We may want to urgently change 2.8 and 2.7.x to fail here, even 
before we settle on what a good config policy is.

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra






[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage

2016-12-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13883:
---
Attachment: (was: HADOOP-13883.005.patch)

> Add description of -fs option in generic command usage
> --
>
> Key: HADOOP-13883
> URL: https://issues.apache.org/jira/browse/HADOOP-13883
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13883.001.patch, HADOOP-13883.004.patch, 
> HADOOP-13883.005.patch, HADOOP-13883.addendum001.patch, 
> HADOOP-13883.addendum002.patch
>
>
> Currently the description of the '-fs' option is missing from the generic 
> command usage in the documentation {{CommandManual.md}}, so users won't know 
> to use this option, even though it already applies to {{hdfs dfsadmin}}, 
> {{hdfs fsck}}, etc.
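For illustration, hypothetical invocations of the kind the missing documentation would cover (the namenode address is made up; -fs overrides fs.defaultFS from core-site.xml for a single command):

```shell
# Run dfsadmin and fsck against a specific namenode rather than the
# cluster's configured default filesystem:
hdfs dfsadmin -fs hdfs://nn1.example.com:8020 -report
hdfs fsck -fs hdfs://nn1.example.com:8020 /user/alice
```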






[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage

2016-12-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13883:
---
Attachment: HADOOP-13883.005.patch

> Add description of -fs option in generic command usage
> --
>
> Key: HADOOP-13883
> URL: https://issues.apache.org/jira/browse/HADOOP-13883
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13883.001.patch, HADOOP-13883.004.patch, 
> HADOOP-13883.005.patch, HADOOP-13883.addendum001.patch, 
> HADOOP-13883.addendum002.patch
>
>
> Currently the description of the '-fs' option is missing from the generic 
> command usage in the documentation {{CommandManual.md}}, so users won't know 
> to use this option, even though it already applies to {{hdfs dfsadmin}}, 
> {{hdfs fsck}}, etc.






[jira] [Updated] (HADOOP-13883) Add description of -fs option in generic command usage

2016-12-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13883:
---
Attachment: HADOOP-13883.005.patch

Attached the patch with the failed test {{TestPipeApplication#testSubmitter}} 
fixed. Will attach the patch for branch-2.8 later if there are no further 
comments. Thanks for the review.

> Add description of -fs option in generic command usage
> --
>
> Key: HADOOP-13883
> URL: https://issues.apache.org/jira/browse/HADOOP-13883
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13883.001.patch, HADOOP-13883.004.patch, 
> HADOOP-13883.005.patch, HADOOP-13883.addendum001.patch, 
> HADOOP-13883.addendum002.patch
>
>
> Currently the description of the '-fs' option is missing from the generic 
> command usage in the documentation {{CommandManual.md}}, so users won't know 
> to use this option, even though it already applies to {{hdfs dfsadmin}}, 
> {{hdfs fsck}}, etc.






[jira] [Comment Edited] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-12-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744762#comment-15744762
 ] 

Steve Loughran edited comment on HADOOP-13826 at 12/13/16 10:27 AM:


Ok, I've sat down for a review. Like I said before, the test needs some work. 
BTW, {{S3AFastOutputStream}} is gone. Are you referring to 
{{S3ABlockOutputStream}}?

h2. {{S3AFileSystem}} 

Aren't there limits on the size of the AWS httpclient pool, or is there 
something I'm missing?


h2. {{ITestConcurrentOps}} 

This must be a subclass of {{AbstractS3ATestBase}}, so only run when 
{{-Dscale}} is set. This will automatically give it extended test timeout, and 
allow that timeout to be overridden in the config or on the maven command line.
* Must use {{path(subpath)}} to get a unique path which works in parallel 
execution, rather than {{new Path("/ITestS3AManyFiles");}}
* {{getRestrictedFileSystem}} can just go "5M" and "10M" when setting sizes
* {{teardown()}} must check auxFS for being null, just in case something went 
wrong in setup.
* Note you can use {{ContractTestUtils.dataset()}} to create a 1MB file; it 
writes in a range of values so it's easier to detect problems on reads. Not 
really needed here, but you should get into the habit of using those methods 
where possible.
* {{testParallelRename}} must give the threads useful names. 
* {{testParallelRename}} should use Callables, so any exceptions raised in 
threads can be raised by test runner. We don't want tests to go wrong and us 
not to notice.
* If {{testParallelRename}} logs exceptions, it must use {{LOG.error()}}
* {{testParallelRename}} must check that the dest files exist, and that the 
source ones don't. Otherwise, it's not actually verifying that the rename 
worked,
only that a parallel series of operations completed.

-{{setup()}} should actually create the test files by uploading one and then 
(sequentially) copying it. Why? It gives you the S3 copy bandwidth, not the 
upload B/W, and parallelises better.- We can't do that unless we expose an 
explicit COPY method in the S3A FS. Something to consider for testing, maybe 
also for the commit logic, though I don't see a need for that (I do want a 
rename-no-overwrite there tho')

Ideally, I'd like that upload/copy to be  in the test itself, as the test 
runner will include its execution time in the test timings. It'd also be 
interesting to use {{ContractTestUtils.NanoTimer}} to time the copy operations, 
so we can get more stats on overall copy times. That's the setup copy calls, as 
well as the parallel ones.
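The Callable-based pattern suggested for {{testParallelRename}} can be sketched as follows. All names here are hypothetical; a real test would call {{fs.rename(src, dest)}} inside {{call()}}. The point is that {{Future.get()}} rethrows a worker's exception as an {{ExecutionException}}, so a failure inside a thread surfaces in the test runner instead of being lost:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallablePatternSketch {

    // Submit n tasks as Callables; task 'failing' throws to simulate a
    // rename failure. Future.get() surfaces that exception to the caller.
    static List<String> runTasks(int n, int failing) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        List<Callable<String>> tasks = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int id = i;
            tasks.add(() -> {
                // a real test would do fs.rename(src, dest) here
                if (id == failing) {
                    throw new IllegalStateException("rename failed in task " + id);
                }
                return "renamed-" + id;
            });
        }
        List<String> results = new ArrayList<>();
        try {
            for (Future<String> f : pool.invokeAll(tasks)) {
                try {
                    results.add(f.get());   // rethrows the worker's exception
                } catch (ExecutionException e) {
                    results.add("caught: " + e.getCause().getMessage());
                }
            }
        } finally {
            pool.shutdown();
        }
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(3, 1));
    }
}
```

In the actual test one would rethrow from {{get()}} (letting JUnit fail the test) rather than collecting the message as done here for demonstration.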


was (Author: ste...@apache.org):
Ok, I've sat down for a review. Like I said before, the test needs some work. 
BTW, {{S3AFastOutputStream}} is gone. Are you referring to 
{{S3ABlockOutputStream}}?

h2. {{S3AFileSystem}} 

Aren't there limits on the size of the AWS httpclient pool, or is there 
something I'm missing?


h2. {{ITestConcurrentOps}} 

This must be a subclass of {{AbstractS3ATestBase}}, so only run when 
{{-Dscale}} is set. This will automatically give it extended test timeout, and 
allow that timeout to be overridden in the config or on the maven command line.
* Must use {{path(subpath)}} to get a unique path which works in parallel 
execution, rather than {{new Path("/ITestS3AManyFiles");}}
* {{getRestrictedFileSystem}} can just go "5M" and "10M" when setting sizes
* {{teardown()}} must check auxFS for being null, just in case something went 
wrong in setup.
* Note you can use {{ContractTestUtils.dataset()}} to create a 1MB file; it 
writes in a range of values so it's easier to detect problems on reads. Not 
really needed here, but you should get into the habit of using those methods 
where possible.
* {{testParallelRename}} must give the threads useful names. 
* {{testParallelRename}} should use Callables, so any exceptions raised in 
threads can be raised by test runner. We don't want tests to go wrong and us 
not to notice.
* If {{testParallelRename}} logs exceptions, it must use {{LOG.error()}}
* {{testParallelRename}} must check that the dest files exist, and that the 
source ones don't. Otherwise, it's not actually verifying that the rename 
worked,
only that a parallel series of operations completed.

{{setup()}} should actually create the test files by uploading one and then 
(sequentially) copying it. Why? It gives you the S3 copy bandwidth, not the 
upload B/W, and parallelises better.

Ideally, I'd like that upload/copy to be  in the test itself, as the test 
runner will include its execution time in the test timings. It'd also be 
interesting to use {{ContractTestUtils.NanoTimer}} to time the copy operations, 
so we can get more stats on overall copy times. That's the setup copy calls, as 
well as the parallel ones.

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: 
