[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Wenxin He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081703#comment-16081703
 ] 

Wenxin He commented on HADOOP-14638:


Thanks for your review and commit, [~ajisakaa].

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this change out from HADOOP-14539 as a 
> separate issue.
> Currently StreamPumper only accepts the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that fail with slf4j.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14638:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~vincent he]!

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this change out from HADOOP-14539 as a 
> separate issue.
> Currently StreamPumper only accepts the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that fail with slf4j.






[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081640#comment-16081640
 ] 

ASF GitHub Bot commented on HADOOP-14638:
-

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/247


> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this change out from HADOOP-14539 as a 
> separate issue.
> Currently StreamPumper only accepts the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that fail with slf4j.






[jira] [Commented] (HADOOP-14436) Remove the redundant colon in ViewFs.md

2017-07-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081669#comment-16081669
 ] 

ASF GitHub Bot commented on HADOOP-14436:
-

Github user aajisaka commented on the issue:

https://github.com/apache/hadoop/pull/223
  
Now this PR is merged. Hi @maobaolong, would you close this PR?


> Remove the redundant colon in ViewFs.md
> ---
>
> Key: HADOOP-14436
> URL: https://issues.apache.org/jira/browse/HADOOP-14436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1, 3.0.0-alpha2
>Reporter: maobaolong
>Assignee: maobaolong
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14436.patch
>
>
> A minor mistake like this can lead beginners down the wrong path.






[jira] [Commented] (HADOOP-14629) Improve exception checking in FileContext related JUnit tests

2017-07-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081667#comment-16081667
 ] 

Akira Ajisaka commented on HADOOP-14629:


+1, thanks Andras.

> Improve exception checking in FileContext related JUnit tests
> -
>
> Key: HADOOP-14629
> URL: https://issues.apache.org/jira/browse/HADOOP-14629
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.2
>
> Attachments: HADOOP-14629.01.patch
>
>
> {{FileContextMainOperationsBaseTest#rename}} and 
> {{TestHDFSFileContextMainOperations#rename}} do the same thing, but in 
> different ways:
> * FileContextMainOperationsBaseTest is able to distinguish exceptions
> * TestHDFSFileContextMainOperations checks the files in case of error
> We should use one rename method that combines both advantages.
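A unified rename check that keeps both advantages can be sketched with java.nio.file standing in for FileContext. The helper name and shape are illustrative, not the actual patch: it asserts the exact exception class (the base test's strength) and then verifies which files survive the failed rename (the HDFS test's strength).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class RenameCheckSketch {
    // Assert both the expected exception type and the post-failure state:
    // the source must still exist and the destination must not.
    static void expectRenameFailure(Path src, Path dst,
            Class<? extends IOException> expected) throws IOException {
        try {
            Files.move(src, dst);
            throw new AssertionError("rename unexpectedly succeeded");
        } catch (IOException e) {
            if (!expected.isInstance(e)) {
                throw new AssertionError("wrong exception: " + e, e);
            }
        }
        if (!Files.exists(src)) throw new AssertionError("src lost");
        if (Files.exists(dst)) throw new AssertionError("dst created");
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rename-sketch");
        Path src = Files.createFile(dir.resolve("a"));
        // Renaming into a nonexistent directory fails with NoSuchFileException.
        expectRenameFailure(src, dir.resolve("no-such-dir").resolve("b"),
            NoSuchFileException.class);
        System.out.println("ok");
    }
}
```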






[jira] [Commented] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-10 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081509#comment-16081509
 ] 

Hongyuan Li commented on HADOOP-14623:
--

Hi [~aw], could you give me a code review?

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch
>
>
> {{KafkaSink}}#{{init}} should set ack to *1* to make sure the message has 
> at least been written to the broker.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to 
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, however, the 
> key type of the Producer is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
>     "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
> producer = new KafkaProducer(props);
> {code}
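The proposed fix can be sketched as a plain java.util.Properties configuration. The property names follow the Kafka producer API; the class name, method name, and the choice of IntegerSerializer for the key are illustrative assumptions about how the two bugs would be fixed, not the actual patch.

```java
import java.util.Properties;

public class KafkaSinkConfigSketch {
    // Sketch of the proposed fix: acks=1 makes the leader broker acknowledge
    // each write (instead of fire-and-forget with "0"), and the key serializer
    // matches the Integer key type the sink actually uses.
    static Properties fixedProducerProps() {
        Properties props = new Properties();
        props.put("request.required.acks", "1");  // was "0"
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties p = fixedProducerProps();
        System.out.println(p.getProperty("request.required.acks")); // prints 1
    }
}
```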






[jira] [Updated] (HADOOP-14586) org.apache.hadoop.util.Shell in 2.7 breaks on Java 9 RC build; backport HADOOP-10775 to 2.7.x

2017-07-10 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-14586:
-
Attachment: HADOOP-14586-branch-2.7-03.patch

+1 on Uwe's patch. Added JavaDoc comment. Will commit if there are no 
objections.

> org.apache.hadoop.util.Shell in 2.7 breaks on Java 9 RC build; 
> backport HADOOP-10775 to 2.7.x
> ---
>
> Key: HADOOP-14586
> URL: https://issues.apache.org/jira/browse/HADOOP-14586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
> Environment: Java 9, build 175 (Java 9 release candidate as of June 
> 25th, 2017)
>Reporter: Uwe Schindler
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: Java9
> Attachments: HADOOP-14586-branch-2.7-01.patch, 
> HADOOP-14586-branch-2.7-02.patch, HADOOP-14586-branch-2.7-03.patch
>
>
> You cannot use any pre-Hadoop 2.8 component anymore with the latest release 
> candidate build of Java 9, because it fails with a 
> StringIndexOutOfBoundsException in {{org.apache.hadoop.util.Shell#<clinit>}}. 
> This leads to a whole cascade of failing classes (next in the chain is 
> StringUtils).
> The reason is that the release candidate build of Java 9 no longer has "-ea" 
> in the version string, and the system property "java.version" is now simply 
> "9". This causes the following line to fail fatally:
> {code:java}
>   private static boolean IS_JAVA7_OR_ABOVE =
>       System.getProperty("java.version").substring(0, 3).compareTo("1.7") >= 0;
> {code}
> Analysis:
> - This code is wrong: comparing a version string this way is incorrect.
> - The {{substring(0, 3)}} is not needed; {{compareTo}} also works without it, 
> although it is still an invalid way to compare versions.
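Why the old line fails: `"9".substring(0, 3)` throws because the string has only one character. A version check that tolerates both the old (`1.7.0_80`) and the new (`9`, `11.0.2`) formats of `java.version` could look like the sketch below. This assumes a numeric major-version parse is what's wanted; it is illustrative, not the actual backported patch.

```java
public class JavaVersionCheckSketch {
    // Parse the major version out of "java.version" without assuming a
    // minimum string length: "1.7.0_80" -> 7, "9" -> 9, "11.0.2" -> 11.
    static int majorVersion(String javaVersion) {
        String v = javaVersion.startsWith("1.")
            ? javaVersion.substring(2)   // legacy "1.x" scheme
            : javaVersion;               // Java 9+ scheme
        int dot = v.indexOf('.');
        if (dot != -1) v = v.substring(0, dot);
        int underscore = v.indexOf('_');
        if (underscore != -1) v = v.substring(0, underscore);
        return Integer.parseInt(v);
    }

    public static void main(String[] args) {
        System.out.println(majorVersion("1.7.0_80")); // prints 7
        System.out.println(majorVersion("9"));        // prints 9
    }
}
```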






[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2017-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081466#comment-16081466
 ] 

Hudson commented on HADOOP-10829:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11984 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11984/])
HADOOP-10829. Iteration on CredentialProviderFactory.serviceLoader is 
(jitendra: rev f1efa14fc676641fa15c11d3147e3ad948b084e9)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java


> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes 
> iterating over it thread-unsafe. If it is accessed from multiple threads, 
> the access should be synchronized.
> Similar synchronization was done when loading compression codec providers 
> in HADOOP-8406. 
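The proposed synchronization can be sketched as follows, with `Runnable` standing in as the service type so the sketch is self-contained. It is illustrative of the technique, not the actual CredentialProviderFactory change.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ServiceLoaderSyncSketch {
    // ServiceLoader instantiates providers lazily during iteration, and its
    // internal state is not thread-safe. Guarding the iteration with a lock
    // (as HADOOP-8406 did for compression codecs) makes concurrent callers
    // safe.
    private static final ServiceLoader<Runnable> serviceLoader =
        ServiceLoader.load(Runnable.class);

    static List<Runnable> providers() {
        List<Runnable> result = new ArrayList<>();
        synchronized (serviceLoader) {   // serialize the lazy iteration
            for (Runnable r : serviceLoader) {
                result.add(r);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // No Runnable providers are registered on a plain JDK classpath,
        // so the list is empty; the point is that the iteration is now
        // safe to call from many threads.
        System.out.println(providers().size()); // prints 0
    }
}
```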






[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081453#comment-16081453
 ] 

Jitendra Nath Pandey commented on HADOOP-10829:
---

Thanks to Benoy and Rakesh for the patches.

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes 
> iterating over it thread-unsafe. If it is accessed from multiple threads, 
> the access should be synchronized.
> Similar synchronization was done when loading compression codec providers 
> in HADOOP-8406. 






[jira] [Updated] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-10829:
--
Fix Version/s: 3.0.0-beta1
   2.9.0

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes 
> iterating over it thread-unsafe. If it is accessed from multiple threads, 
> the access should be synchronized.
> Similar synchronization was done when loading compression codec providers 
> in HADOOP-8406. 






[jira] [Updated] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-10829:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk and branch-2.

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes 
> iterating over it thread-unsafe. If it is accessed from multiple threads, 
> the access should be synchronized.
> Similar synchronization was done when loading compression codec providers 
> in HADOOP-8406. 






[jira] [Commented] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081402#comment-16081402
 ] 

Hadoop QA commented on HADOOP-14637:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  8s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14637 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876511/HADOOP-14637.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4b53dce5269f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5496a34 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12757/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12757/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12757/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 

[jira] [Commented] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081424#comment-16081424
 ] 

Jitendra Nath Pandey commented on HADOOP-14443:
---

I will commit this to branch-2 shortly.

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all requests will fail.
> So the proposal is to:
> - Add support for configuring multiple URLs, so that if communication to one 
> URL fails, the client can retry on another instance of the service running 
> on a different node for authorization, SASKey generation and delegation 
> token generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> comma-separated lists of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma-separated list of service URLs for getting the 
> delegation token.
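The retry-across-URLs behavior in the proposal can be sketched generically. The method name and functional interface below are illustrative; the actual remote-call helper API in hadoop-azure differs.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class RemoteCallFailoverSketch {
    // Try each URL from a comma-separated list in order; on failure, move on
    // to the next service instance, and rethrow the last failure only if
    // every URL failed.
    static <T> T callWithFailover(String urlsCsv, Function<String, T> call) {
        List<String> urls = Arrays.asList(urlsCsv.split(","));
        RuntimeException last = null;
        for (String url : urls) {
            try {
                return call.apply(url.trim());
            } catch (RuntimeException e) {
                last = e;                  // try the next service instance
            }
        }
        throw last != null ? last : new IllegalStateException("no URLs");
    }

    public static void main(String[] args) {
        // host1 is "down" in this simulation, so the call fails over to host2.
        String result = callWithFailover(
            "http://host1:8080, http://host2:8080",
            url -> {
                if (url.contains("host1")) {
                    throw new RuntimeException("host1 down");
                }
                return "token-from-" + url;
            });
        System.out.println(result); // prints token-from-http://host2:8080
    }
}
```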






[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081440#comment-16081440
 ] 

Jitendra Nath Pandey commented on HADOOP-10829:
---

I will commit this shortly.

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes 
> iterating over it thread-unsafe. If it is accessed from multiple threads, 
> the access should be synchronized.
> Similar synchronization was done when loading compression codec providers 
> in HADOOP-8406. 






[jira] [Commented] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081428#comment-16081428
 ] 

Jitendra Nath Pandey commented on HADOOP-14443:
---

Committed to branch-2. Thanks to Santhosh G Nayak.

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all requests will fail.
> So the proposal is to:
> - Add support for configuring multiple URLs, so that if communication to one 
> URL fails, the client can retry on another instance of the service running 
> on a different node for authorization, SASKey generation and delegation 
> token generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> comma-separated lists of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma-separated list of service URLs for getting the 
> delegation token.






[jira] [Updated] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-14443:
--
Fix Version/s: 2.9.0

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all requests will fail.
> So the proposal is to:
> - Add support for configuring multiple URLs, so that if communication to one 
> URL fails, the client can retry on another instance of the service running 
> on a different node for authorization, SASKey generation and delegation 
> token generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> comma-separated lists of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma-separated list of service URLs for getting the 
> delegation token.






[jira] [Updated] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-14443:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all requests will fail.
> So the proposal is to:
> - Add support for configuring multiple URLs, so that if communication to one 
> URL fails, the client can retry on another instance of the service running 
> on a different node for authorization, SASKey generation and delegation 
> token generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> comma-separated lists of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma-separated list of service URLs for getting the 
> delegation token.






[jira] [Commented] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081421#comment-16081421
 ] 

Anu Engineer commented on HADOOP-14443:
---

I have run test-patch on my local machine and verified that this patch works 
correctly on branch-2. +1 for committing; it seems we are having some issues 
with Jenkins and branch-2.

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support to configure multiple URLs, so that if communication to one URL 
> fails, the client can retry on another instance of the service running on a 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> the comma separated list of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma separated list of service URLs to get the delegation 
> token.






[jira] [Commented] (HADOOP-14521) KMS client needs retry logic

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081327#comment-16081327
 ] 

Hadoop QA commented on HADOOP-14521:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14521 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876487/HADOOP-14521-trunk-10.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux eb7f682feb85 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09653ea |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12755/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Comment Edited] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081229#comment-16081229
 ] 

Daniel Templeton edited comment on HADOOP-14637 at 7/10/17 11:13 PM:
-

Here's a patch with the check removed. (003)


was (Author: templedf):
Here's a patch with the check removed.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch, 
> HADOOP-14637.003.patch, HADOOP-14637.004.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Comment Edited] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081232#comment-16081232
 ] 

Daniel Templeton edited comment on HADOOP-14637 at 7/10/17 11:13 PM:
-

And here's one without removing the check. (004)


was (Author: templedf):
And here's one without removing the check.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch, 
> HADOOP-14637.003.patch, HADOOP-14637.004.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Updated] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14637:
--
Attachment: HADOOP-14637.003.patch

Here's a patch with the check removed.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch, 
> HADOOP-14637.003.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Updated] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14637:
--
Attachment: HADOOP-14637.004.patch

And here's one without removing the check.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch, 
> HADOOP-14637.003.patch, HADOOP-14637.004.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Commented] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081172#comment-16081172
 ] 

Jason Lowe commented on HADOOP-14637:
-

The check after looping is always needed, even when the wait time is a multiple 
of the check interval.  We should not assume the check call is instantaneous, 
nor that we wake up exactly when requested.  So it sounds like there are two 
things to fix here: the missing check after the wait expires, and fixing the 
TestRMRestart case to call with correct arguments.  I'm hesitant to remove the 
precondition check, since it caught a bug here.  Is there a real need to allow 
wait < check in practice?
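The polling pattern under discussion (a precondition on wait versus interval, plus one final check after the loop so a slow last sleep cannot miss a condition that became true) can be sketched as below. This is an illustrative sketch, not the actual `GenericTestUtils.waitFor` code:

```java
import java.util.function.Supplier;

/** Sketch of a waitFor-style polling helper with a final post-loop check. */
public class WaitFor {

    static boolean waitFor(Supplier<Boolean> check, long intervalMs, long totalMs) {
        // The precondition debated in this thread: reject a total wait
        // shorter than the check interval, since it usually indicates the
        // caller confused the wait time with a retry count.
        if (totalMs < intervalMs) {
            throw new IllegalArgumentException(
                "Total wait time should be greater than check interval time");
        }
        long deadline = System.currentTimeMillis() + totalMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.get()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;   // fall through to the final check
            }
        }
        // Final check after the wait expires: needed even when totalMs is an
        // exact multiple of intervalMs, because neither the check call nor the
        // sleep wakeup is instantaneous.
        return check.get();
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50ms, well within the 200ms total wait.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50, 10, 200);
        System.out.println(ok);  // true
    }
}
```

Without the post-loop check, a condition that becomes true during the final sleep would be reported as a timeout, which is exactly the flakiness the thread describes.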


> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Comment Edited] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081218#comment-16081218
 ] 

Daniel Templeton edited comment on HADOOP-14637 at 7/10/17 9:58 PM:


I was just having exactly that internal debate.  If the call were being made 
programmatically, i.e. with a generated wait time and interval, having the 
check could cause an unexpected failure.  Not really a strong argument, but I 
can't come up with any compelling reason to keep the check.  The reason to keep 
it would be to catch the case where the caller gets confused and thinks the 
wait time is the number of retries, but that seems above and beyond for an API. 
As long as it's clearly documented, it's on the caller not to be dumb.  I don't 
know.  If you feel strongly either way, consider me swayed.


was (Author: templedf):
I was just having exactly that internal debate.  If the call were being made 
programmatically, i.e. with generated wait time and internal, having the check 
could cause an unexpected failure.  Not really a strong argument, but I can't 
come up with any compelling reason to keep the check.  The reason to keep it 
would be to catch the case where the caller gets confused and thinks the wait 
time is the number of retries, but that seems above and beyond for an API.  As 
long as it's clearly documented, it's on the caller to not be dumb.  I don't 
know.  If you feel strongly either way, consider me swayed.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch, 
> HADOOP-14637.003.patch, HADOOP-14637.004.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Comment Edited] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081218#comment-16081218
 ] 

Daniel Templeton edited comment on HADOOP-14637 at 7/10/17 9:52 PM:


I was just having exactly that internal debate.  If the call were being made 
programmatically, i.e. with generated wait time and internal, having the check 
could cause an unexpected failure.  Not really a strong argument, but I can't 
come up with any compelling reason to keep the check.  The reason to keep it 
would be to catch the case where the caller gets confused and thinks the wait 
time is the number of retries, but that seems above and beyond for an API.  As 
long as it's clearly documented, it's on the caller to not be dumb.  I don't 
know.  If you feel strongly either way, consider me swayed.


was (Author: templedf):
I was just having exactly that internal debate.  If the call were being made 
programmatically, i.e. with generated wait time and internal, having the check 
could cause an unexpected failure.  Not really a strong argument, but I can't 
come up with any compelling reason to keep the check.  The reason to keep it 
would be to catch the case where the caller gets confused and thinks the wait 
time is the number of retries, but that seems above and beyond for an API.  As 
long as it's clearly documented, it's on the caller to not be dumb.  I don't 
know.  If you feel strongly either way, consider be swayed.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Commented] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081218#comment-16081218
 ] 

Daniel Templeton commented on HADOOP-14637:
---

I was just having exactly that internal debate.  If the call were being made 
programmatically, i.e. with generated wait time and internal, having the check 
could cause an unexpected failure.  Not really a strong argument, but I can't 
come up with any compelling reason to keep the check.  The reason to keep it 
would be to catch the case where the caller gets confused and thinks the wait 
time is the number of retries, but that seems above and beyond for an API.  As 
long as it's clearly documented, it's on the caller to not be dumb.  I don't 
know.  If you feel strongly either way, consider be swayed.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Comment Edited] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081156#comment-16081156
 ] 

Anu Engineer edited comment on HADOOP-14443 at 7/10/17 9:11 PM:


I have started a pre-commit build here.

https://builds.apache.org/blue/organizations/jenkins/Hadoop-branch2-parameterized/detail/Hadoop-branch2-parameterized/23/pipeline


was (Author: anu):
I have started a pre-commit build here.

https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/20216/pipeline

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support to configure multiple URLs, so that if communication to one URL 
> fails, the client can retry on another instance of the service running on a 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> the comma separated list of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma separated list of service URLs to get the delegation 
> token.






[jira] [Updated] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-14443:
--
Status: Open  (was: Patch Available)

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support to configure multiple URLs, so that if communication to one URL 
> fails, the client can retry on another instance of the service running on a 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> the comma separated list of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma separated list of service URLs to get the delegation 
> token.






[jira] [Updated] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-14443:
--
Status: Patch Available  (was: Open)

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support to configure multiple URLs, so that if communication to one URL 
> fails, the client can retry on another instance of the service running on a 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> the comma separated list of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma separated list of service URLs to get the delegation 
> token.






[jira] [Commented] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081156#comment-16081156
 ] 

Anu Engineer commented on HADOOP-14443:
---

I have started a pre-commit build here.

https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/20216/pipeline

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support to configure multiple URLs, so that if communication to one URL 
> fails, client can retry on another instance of the service running on 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> the comma separated list of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma separated list of service URLs to get the delegation 
> token.






[jira] [Commented] (HADOOP-14535) Support for random access and seek of block blobs

2017-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081133#comment-16081133
 ] 

Steve Loughran commented on HADOOP-14535:
-

Patch 006. This is patch 005 with all the changes I suggested, particularly the 
tests.

The original test suite has a couple of operational flaws:
# it's slow
# it leaves 128 MB files around, which can be expensive.

I've reworked it in the same style as {{AbstractSTestS3AHugeFiles}}: ordered 
names guarantee the test cases run in sequence, and the final test deletes the 
file. I've also downsized the file.
This is lined up for HADOOP-14553, which ports a copy of the same test into 
Azure and runs tests in parallel. The tests in this suite should be something 
that can be merged into that test, making it a {{scale}} test with a 
configurable dataset size.

Tested: new suite, yes. Remainder: in progress

{code}
---
Running org.apache.hadoop.fs.azure.TestBlockBlobInputStream
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 212.423 sec - 
in org.apache.hadoop.fs.azure.TestBlockBlobInputStream

Results :

Tests run: 19, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 03:37 min (Wall Clock)
[INFO] Finished at: 2017-07-10T21:46:59+01:00
[INFO] Final Memory: 46M/820M
[INFO] 
{code}


> Support for random access and seek of block blobs
> -
>
> Key: HADOOP-14535
> URL: https://issues.apache.org/jira/browse/HADOOP-14535
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas
>Assignee: Thomas
> Attachments: 
> 0001-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0003-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0004-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0005-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> HADOOP-14535-006.patch
>
>
> This change adds a seek-able stream for reading block blobs to the wasb:// 
> file system.
> If seek() is not used or if only forward seek() is used, the behavior of 
> read() is unchanged.
> That is, the stream is optimized for sequential reads by reading chunks (over 
> the network) in
> the size specified by "fs.azure.read.request.size" (default is 4 megabytes).
> If reverse seek() is used, the behavior of read() changes in favor of reading 
> the actual number
> of bytes requested in the call to read(), with some constraints.  If the size 
> requested is smaller
> than 16 kilobytes and cannot be satisfied by the internal buffer, the network 
> read will be 16
> kilobytes.  If the size requested is greater than 4 megabytes, it will be 
> satisfied by sequential
> 4 megabyte reads over the network.
> This change improves the performance of FSInputStream.seek() by not closing 
> and re-opening the
> stream, which for block blobs also involves a network operation to read the 
> blob metadata. Now
> NativeAzureFsInputStream.seek() checks if the stream is seek-able and moves 
> the read position.
> [^attachment-name.zip]
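The sizing rules above can be modeled as follows. This is a hedged sketch of just the stated constraints (16 KB floor, 4 MB ceiling), not the actual stream code; {{plan_network_reads}} is an invented name:

```python
KB, MB = 1024, 1024 * 1024

def plan_network_reads(requested, buffered, floor=16 * KB, ceiling=4 * MB):
    """Return the network read sizes needed to satisfy a read() after a
    reverse seek, per the rules described above (illustrative model)."""
    remaining = requested - buffered      # bytes the internal buffer cannot supply
    if remaining <= 0:
        return []                         # fully satisfied from the buffer
    if remaining < floor:
        return [floor]                    # small reads are rounded up to 16 KB
    reads = []
    while remaining > 0:
        chunk = min(remaining, ceiling)   # large reads become sequential 4 MB reads
        reads.append(chunk)
        remaining -= chunk
    return reads

print(plan_network_reads(4 * KB, buffered=0))    # [16384]
print(plan_network_reads(10 * MB, buffered=0))   # [4194304, 4194304, 2097152]
```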






[jira] [Commented] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081150#comment-16081150
 ] 

Daniel Templeton commented on HADOOP-14637:
---

Yeah, I missed the actual bug. :)  My point was that the precondition is 
throwing an exception if the wait time is less than the interval, but that only 
makes sense if the wait time is a multiple of the interval.  The scenario you 
described is the same in any case where that's not true.  Ex: wait=110, 
interval=100 => try, sleep, try, sleep, throw exception.  The final test before 
giving up is required regardless of whether the precondition is removed.
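A minimal polling loop shows why that final test matters (a sketch of the pattern, not the actual {{GenericTestUtils.waitFor}} code): with wait=110 and interval=100, the loop alone exits after the second sleep, so only the extra check before throwing gives the condition its last chance. A simulated clock keeps the example deterministic:

```python
def wait_for(check, check_every_ms, wait_for_ms, clock, sleep):
    """Poll check() every check_every_ms until wait_for_ms elapses
    (illustrative sketch with injectable clock/sleep)."""
    deadline = clock() + wait_for_ms
    while clock() < deadline:
        if check():
            return
        sleep(check_every_ms)
    if check():          # final try before giving up: needed when wait_for_ms
        return           # is not a multiple of check_every_ms
    raise TimeoutError("timed out waiting for condition")

now = {"t": 0}                       # simulated clock, in milliseconds
def clock(): return now["t"]
def sleep(ms): now["t"] += ms

calls = {"n": 0}
def condition():
    calls["n"] += 1
    return calls["n"] >= 3           # becomes true only on the third check

wait_for(condition, check_every_ms=100, wait_for_ms=110, clock=clock, sleep=sleep)
print("checks:", calls["n"])         # prints: checks: 3 (the final check saved it)
```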

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Updated] (HADOOP-14535) Support for random access and seek of block blobs

2017-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14535:

Attachment: HADOOP-14535-006.patch

> Support for random access and seek of block blobs
> -
>
> Key: HADOOP-14535
> URL: https://issues.apache.org/jira/browse/HADOOP-14535
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas
>Assignee: Thomas
> Attachments: 
> 0001-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0003-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0004-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0005-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> HADOOP-14535-006.patch
>
>
> This change adds a seek-able stream for reading block blobs to the wasb:// 
> file system.
> If seek() is not used or if only forward seek() is used, the behavior of 
> read() is unchanged.
> That is, the stream is optimized for sequential reads by reading chunks (over 
> the network) in
> the size specified by "fs.azure.read.request.size" (default is 4 megabytes).
> If reverse seek() is used, the behavior of read() changes in favor of reading 
> the actual number
> of bytes requested in the call to read(), with some constraints.  If the size 
> requested is smaller
> than 16 kilobytes and cannot be satisfied by the internal buffer, the network 
> read will be 16
> kilobytes.  If the size requested is greater than 4 megabytes, it will be 
> satisfied by sequential
> 4 megabyte reads over the network.
> This change improves the performance of FSInputStream.seek() by not closing 
> and re-opening the
> stream, which for block blobs also involves a network operation to read the 
> blob metadata. Now
> NativeAzureFsInputStream.seek() checks if the stream is seek-able and moves 
> the read position.
> [^attachment-name.zip]






[jira] [Updated] (HADOOP-14535) Support for random access and seek of block blobs

2017-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14535:

Status: Open  (was: Patch Available)

> Support for random access and seek of block blobs
> -
>
> Key: HADOOP-14535
> URL: https://issues.apache.org/jira/browse/HADOOP-14535
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas
>Assignee: Thomas
> Attachments: 
> 0001-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0003-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0004-Random-access-and-seek-imporvements-to-azure-file-system.patch, 
> 0005-Random-access-and-seek-imporvements-to-azure-file-system.patch
>
>
> This change adds a seek-able stream for reading block blobs to the wasb:// 
> file system.
> If seek() is not used or if only forward seek() is used, the behavior of 
> read() is unchanged.
> That is, the stream is optimized for sequential reads by reading chunks (over 
> the network) in
> the size specified by "fs.azure.read.request.size" (default is 4 megabytes).
> If reverse seek() is used, the behavior of read() changes in favor of reading 
> the actual number
> of bytes requested in the call to read(), with some constraints.  If the size 
> requested is smaller
> than 16 kilobytes and cannot be satisfied by the internal buffer, the network 
> read will be 16
> kilobytes.  If the size requested is greater than 4 megabytes, it will be 
> satisfied by sequential
> 4 megabyte reads over the network.
> This change improves the performance of FSInputStream.seek() by not closing 
> and re-opening the
> stream, which for block blobs also involves a network operation to read the 
> blob metadata. Now
> NativeAzureFsInputStream.seek() checks if the stream is seek-able and moves 
> the read position.
> [^attachment-name.zip]






[jira] [Updated] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-10 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-14443:
--
Attachment: HADOOP-14443-branch-2.2.patch

+1 for the latest branch-2 patch. Need a jenkins run before commit.

Renamed the last branch-2 patch for jenkins to pick up.

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> ---
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, 
> HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, 
> HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch, 
> HADOOP-14443-branch-2.2.patch, HADOOP-14443-branch2-2.patch
>
>
> Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support to configure multiple URLs, so that if communication to one URL 
> fails, client can retry on another instance of the service running on 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> the comma separated list of URLs.
> - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to 
> configure the comma separated list of service URLs to get the delegation 
> token.






[jira] [Commented] (HADOOP-14044) Synchronization issue in delegation token cancel functionality

2017-07-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081118#comment-16081118
 ] 

Xiao Chen commented on HADOOP-14044:


Thanks [~vinayrpet] for the suggestion.

Just pushed to branch-2.8 and branch-2.7. The cherry-pick was clean. Compiled + 
ran TestZKDelegationTokenSecretManager locally before pushing.

> Synchronization issue in delegation token cancel functionality
> --
>
> Key: HADOOP-14044
> URL: https://issues.apache.org/jira/browse/HADOOP-14044
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: dt_fail.log, dt_success.log, HADOOP-14044-001.patch, 
> HADOOP-14044-002.patch, HADOOP-14044-003.patch
>
>
> We are using the Hadoop delegation token authentication functionality in Apache 
> Solr. As part of the integration testing, I found the following issue with the 
> delegation token cancelation functionality.
> Consider a setup with 2 Solr servers (S1 and S2) which are configured to use 
> delegation token functionality backed by Zookeeper. Now invoke the following 
> steps:
> [Step 1] Send a request to S1 to create a delegation token.
>   (Delegation token DT is created successfully)
> [Step 2] Send a request to cancel DT to S2
>   (DT is canceled successfully. client receives HTTP 200 response)
> [Step 3] Send a request to cancel DT to S2 again
>   (DT cancelation fails. client receives HTTP 404 response)
> [Step 4] Send a request to cancel DT to S1
> At this point we get two different responses.
> - DT cancelation fails. client receives HTTP 404 response
> - DT cancelation succeeds. client receives HTTP 200 response
> Also, as per the current implementation, each server maintains an in-memory 
> cache of current tokens which is updated using the ZK watch mechanism, e.g. 
> the ZK watch on S1 will ensure that the in-memory cache is synchronized after 
> step 2.
> After investigation, I found that the root cause of this behavior is a race 
> condition between step 4 and the firing of the ZK watch on S1. Whenever the 
> watch fires before step 4, we get an HTTP 404 response (as expected). When 
> that is not the case, we get an HTTP 200 response along with the following 
> ERROR message in the log:
> {noformat}
> Attempted to remove a non-existing znode /ZKDTSMTokensRoot/DT_XYZ
> {noformat}
> From the client's perspective, the server *should* return an HTTP 404 error 
> when the cancel request is sent for an invalid token.
> Ref: Here is the relevant Solr unit test for reference,
> https://github.com/apache/lucene-solr/blob/746786636404cdb8ce505ed0ed02b8d9144ab6c4/solr/core/src/test/org/apache/solr/cloud/TestSolrCloudWithDelegationTokens.java#L285
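A toy simulation of the race (hypothetical code, not the Solr or Hadoop implementation): each server answers a cancel request from its local cache and then deletes the znode, so the result of step 4 depends on whether S1's watch has fired yet.

```python
zk_tokens = {"DT_XYZ"}                     # znodes under /ZKDTSMTokensRoot
s1_cache, s2_cache = {"DT_XYZ"}, {"DT_XYZ"}

def cancel(cache, token):
    """Cancel against one server's in-memory view (illustrative model)."""
    if token not in cache:
        return 404                         # unknown token
    cache.discard(token)
    if token in zk_tokens:
        zk_tokens.discard(token)
    else:
        # matches the logged error: znode already removed by the other server
        print("Attempted to remove a non-existing znode /ZKDTSMTokensRoot/DT_XYZ")
    return 200

def fire_watch(cache):
    cache.intersection_update(zk_tokens)   # ZK watch syncs the cache with ZK

print(cancel(s2_cache, "DT_XYZ"))   # step 2 -> 200
print(cancel(s2_cache, "DT_XYZ"))   # step 3 -> 404
# Step 4 races with S1's watch. If the watch has NOT fired yet:
print(cancel(s1_cache, "DT_XYZ"))   # -> 200, and the ERROR above is logged

# If the watch fires first, S1's cache is already synced and step 4 gets 404:
s1_cache.add("DT_XYZ")              # reset S1 for the alternate interleaving
fire_watch(s1_cache)                # watch removes DT_XYZ (gone from ZK)
print(cancel(s1_cache, "DT_XYZ"))   # -> 404, as the client expects
```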






[jira] [Updated] (HADOOP-14044) Synchronization issue in delegation token cancel functionality

2017-07-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14044:
---
Fix Version/s: 2.8.2
   2.7.4

> Synchronization issue in delegation token cancel functionality
> --
>
> Key: HADOOP-14044
> URL: https://issues.apache.org/jira/browse/HADOOP-14044
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: dt_fail.log, dt_success.log, HADOOP-14044-001.patch, 
> HADOOP-14044-002.patch, HADOOP-14044-003.patch
>
>
> We are using the Hadoop delegation token authentication functionality in Apache 
> Solr. As part of the integration testing, I found the following issue with the 
> delegation token cancelation functionality.
> Consider a setup with 2 Solr servers (S1 and S2) which are configured to use 
> delegation token functionality backed by Zookeeper. Now invoke the following 
> steps:
> [Step 1] Send a request to S1 to create a delegation token.
>   (Delegation token DT is created successfully)
> [Step 2] Send a request to cancel DT to S2
>   (DT is canceled successfully. client receives HTTP 200 response)
> [Step 3] Send a request to cancel DT to S2 again
>   (DT cancelation fails. client receives HTTP 404 response)
> [Step 4] Send a request to cancel DT to S1
> At this point we get two different responses.
> - DT cancelation fails. client receives HTTP 404 response
> - DT cancelation succeeds. client receives HTTP 200 response
> Also, as per the current implementation, each server maintains an in-memory 
> cache of current tokens which is updated using the ZK watch mechanism, e.g. 
> the ZK watch on S1 will ensure that the in-memory cache is synchronized after 
> step 2.
> After investigation, I found that the root cause of this behavior is a race 
> condition between step 4 and the firing of the ZK watch on S1. Whenever the 
> watch fires before step 4, we get an HTTP 404 response (as expected). When 
> that is not the case, we get an HTTP 200 response along with the following 
> ERROR message in the log:
> {noformat}
> Attempted to remove a non-existing znode /ZKDTSMTokensRoot/DT_XYZ
> {noformat}
> From the client's perspective, the server *should* return an HTTP 404 error 
> when the cancel request is sent for an invalid token.
> Ref: Here is the relevant Solr unit test for reference,
> https://github.com/apache/lucene-solr/blob/746786636404cdb8ce505ed0ed02b8d9144ab6c4/solr/core/src/test/org/apache/solr/cloud/TestSolrCloudWithDelegationTokens.java#L285






[jira] [Updated] (HADOOP-14521) KMS client needs retry logic

2017-07-10 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14521:

Attachment: HADOOP-14521-trunk-10.patch

> KMS client needs retry logic
> 
>
> Key: HADOOP-14521
> URL: https://issues.apache.org/jira/browse/HADOOP-14521
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14521.09.patch, HADOOP-14521-trunk-10.patch, 
> HDFS-11804-branch-2.8.patch, HDFS-11804-trunk-1.patch, 
> HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, 
> HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, HDFS-11804-trunk-7.patch, 
> HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch
>
>
> The kms client appears to have no retry logic – at all.  It's completely 
> decoupled from the ipc retry logic.  This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may experience higher failure rates if they do not 
> already have retry logic.
> # Tasks reading EZ files will fail, probably be masked by framework reattempts
> # EZ file creation fails after creating a 0-length file – client receives 
> EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will prematurely fail
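The kind of client-side retry being asked for can be sketched as follows. This is an illustrative pattern only, not the actual KMS client change; the names and parameters are invented:

```python
import time

def call_with_retries(op, attempts=4, base_delay_s=0.01):
    """Retry transient failures with exponential backoff instead of
    failing on the first error (illustrative sketch)."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                                  # out of attempts
            time.sleep(base_delay_s * (2 ** attempt))  # back off, then retry

calls = {"n": 0}
def flaky_kms_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("KMS unreachable")   # e.g. KMS restarting mid-upgrade
    return "EDEK decrypted"

print(call_with_retries(flaky_kms_call))   # prints: EDEK decrypted
print("attempts used:", calls["n"])        # prints: attempts used: 3
```

A real implementation would also need to distinguish retriable failures (connection refused, timeouts) from permanent ones (authorization errors), which is why coupling it to the existing IPC retry policies is attractive.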






[jira] [Commented] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque

2017-07-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080998#comment-16080998
 ] 

Allen Wittenauer commented on HADOOP-14597:
---

I think Yi is MIA.  It'd be good to get some of the EC folks involved, since I 
think this is in their code path:  Pinging [~zhz], [~lewuathe], [~andrew.wang] 
to help look this over.

> Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been 
> made opaque
> 
>
> Key: HADOOP-14597
> URL: https://issues.apache.org/jira/browse/HADOOP-14597
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
> Environment: openssl-1.1.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HADOOP-14597.00.patch, HADOOP-14597.01.patch, 
> HADOOP-14597.02.patch, HADOOP-14597.03.patch, HADOOP-14597.04.patch
>
>
> Trying to build Hadoop trunk on Fedora 26 which has openssl-devel-1.1.0 fails 
> with this error
> {code}[WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:
>  In function ‘check_update_max_output_len’:
> [WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14:
>  error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct 
> evp_cipher_ctx_st}’
> [WARNING]if (context->flags & EVP_CIPH_NO_PADDING) {
> [WARNING]   ^~
> {code}
> https://github.com/openssl/openssl/issues/962 mattcaswell says
> {quote}
> One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 
> version is that many types have been made opaque, i.e. applications are no 
> longer allowed to look inside the internals of the structures
> {quote}






[jira] [Updated] (HADOOP-14521) KMS client needs retry logic

2017-07-10 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14521:

Status: Patch Available  (was: Open)

> KMS client needs retry logic
> 
>
> Key: HADOOP-14521
> URL: https://issues.apache.org/jira/browse/HADOOP-14521
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14521.09.patch, HADOOP-14521-trunk-10.patch, 
> HDFS-11804-branch-2.8.patch, HDFS-11804-trunk-1.patch, 
> HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, 
> HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, HDFS-11804-trunk-7.patch, 
> HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch
>
>
> The kms client appears to have no retry logic – at all.  It's completely 
> decoupled from the ipc retry logic.  This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may experience higher failure rates if they do not 
> already have retry logic.
> # Tasks reading EZ files will fail, probably be masked by framework reattempts
> # EZ file creation fails after creating a 0-length file – client receives 
> EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will prematurely fail






[jira] [Created] (HADOOP-14641) hadoop-openstack driver reports input stream leaking

2017-07-10 Thread Chen He (JIRA)
Chen He created HADOOP-14641:


 Summary: hadoop-openstack driver reports input stream leaking
 Key: HADOOP-14641
 URL: https://issues.apache.org/jira/browse/HADOOP-14641
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.3
Reporter: Chen He


[2017-07-07 14:51:07,052] ERROR Input stream is leaking handles by not being 
closed() properly: HttpInputStreamWithRelease working with https://url/logs 
released=false dataConsumed=false 
(org.apache.hadoop.fs.swift.snative.SwiftNativeInputStream:259)
[2017-07-07 14:51:07,052] DEBUG Releasing connection to https://url/logs:  
finalize() (org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease:101)
java.lang.Exception: stack
at 
org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease.<init>(HttpInputStreamWithRelease.java:71)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient$10.extractResult(SwiftRestClient.java:1523)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient$10.extractResult(SwiftRestClient.java:1520)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1406)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient.doGet(SwiftRestClient.java:1520)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient.getData(SwiftRestClient.java:679)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObject(SwiftNativeFileSystemStore.java:276)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeInputStream.<init>(SwiftNativeInputStream.java:104)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.open(SwiftNativeFileSystem.java:555)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.open(SwiftNativeFileSystem.java:536)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
at 
com.oracle.kafka.connect.swift.SwiftStorage.exists(SwiftStorage.java:74)
at io.confluent.connect.hdfs.DataWriter.createDir(DataWriter.java:371)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:175)
at 
com.oracle.kafka.connect.swift.SwiftSinkTask.start(SwiftSinkTask.java:78)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:231)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:145)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
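The leak pattern in the log can be reproduced with a toy model (hypothetical code, not the swift driver itself): an existence check that opens a stream and never closes it leaves the handle for finalize() to reclaim, while closing deterministically releases it right away.

```python
class TrackedStream:
    """Stand-in for HttpInputStreamWithRelease: counts open handles."""
    open_handles = 0

    def __init__(self):
        TrackedStream.open_handles += 1

    def close(self):
        TrackedStream.open_handles -= 1

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

def exists_leaky(open_stream):
    open_stream()              # opened only to probe the path; never closed
    return True

def exists_safe(open_stream):
    with open_stream():        # guarantees close() even on error
        return True

exists_leaky(TrackedStream)
print("handles still open:", TrackedStream.open_handles)   # prints: 1
TrackedStream.open_handles = 0
exists_safe(TrackedStream)
print("handles still open:", TrackedStream.open_handles)   # prints: 0
```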






[jira] [Commented] (HADOOP-14625) error message in S3AUtils.getServerSideEncryptionKey() needs to expand property constant

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080833#comment-16080833
 ] 

Hadoop QA commented on HADOOP-14625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876463/HADOOP-14625.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 45dfcd08362a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09653ea |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12754/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12754/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> error message in S3AUtils.getServerSideEncryptionKey() needs to expand 
> property constant
> 
>
> Key: HADOOP-14625
> URL: https://issues.apache.org/jira/browse/HADOOP-14625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HADOOP-14625.001.patch
>
>

[jira] [Updated] (HADOOP-14625) error message in S3AUtils.getServerSideEncryptionKey() needs to expand property constant

2017-07-10 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14625:

Status: Patch Available  (was: Open)

> error message in S3AUtils.getServerSideEncryptionKey() needs to expand 
> property constant
> 
>
> Key: HADOOP-14625
> URL: https://issues.apache.org/jira/browse/HADOOP-14625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HADOOP-14625.001.patch
>
>
> The error message in {{getServerSideEncryptionKey}} says that the property 
> isn't valid, but it doesn't actually expand the constant defining its name:
> {code}
> LOG.error("Cannot retrieve SERVER_SIDE_ENCRYPTION_KEY", e);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13145) In DistCp, prevent unnecessary getFileStatus call when not preserving metadata.

2017-07-10 Thread Adam Kramer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080807#comment-16080807
 ] 

Adam Kramer commented on HADOOP-13145:
--

[~ste...@apache.org] Any idea when 2.8.1 will be released?

> In DistCp, prevent unnecessary getFileStatus call when not preserving 
> metadata.
> ---
>
> Key: HADOOP-13145
> URL: https://issues.apache.org/jira/browse/HADOOP-13145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13145.001.patch, HADOOP-13145.003.patch, 
> HADOOP-13145-branch-2.004.patch, HADOOP-13145-branch-2.8.004.patch
>
>
> After DistCp copies a file, it calls {{getFileStatus}} to get the 
> {{FileStatus}} from the destination so that it can compare to the source and 
> update metadata if necessary.  If the DistCp command was run without the 
> option to preserve metadata attributes, then this additional 
> {{getFileStatus}} call is wasteful.






[jira] [Updated] (HADOOP-14625) error message in S3AUtils.getServerSideEncryptionKey() needs to expand property constant

2017-07-10 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14625:

Attachment: HADOOP-14625.001.patch

Post v001 patch.

> error message in S3AUtils.getServerSideEncryptionKey() needs to expand 
> property constant
> 
>
> Key: HADOOP-14625
> URL: https://issues.apache.org/jira/browse/HADOOP-14625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HADOOP-14625.001.patch
>
>
> The error message in {{getServerSideEncryptionKey}} says that the property 
> isn't valid, but it doesn't actually expand the constant defining its name:
> {code}
> LOG.error("Cannot retrieve SERVER_SIDE_ENCRYPTION_KEY", e);
> {code}






[jira] [Assigned] (HADOOP-14625) error message in S3AUtils.getServerSideEncryptionKey() needs to expand property constant

2017-07-10 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HADOOP-14625:
---

Assignee: Chen Liang

> error message in S3AUtils.getServerSideEncryptionKey() needs to expand 
> property constant
> 
>
> Key: HADOOP-14625
> URL: https://issues.apache.org/jira/browse/HADOOP-14625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Trivial
>
> The error message in {{getServerSideEncryptionKey}} says that the property 
> isn't valid, but it doesn't actually expand the constant defining its name:
> {code}
> LOG.error("Cannot retrieve SERVER_SIDE_ENCRYPTION_KEY", e);
> {code}






[jira] [Commented] (HADOOP-14535) Support for random access and seek of block blobs

2017-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080668#comment-16080668
 ] 

Steve Loughran commented on HADOOP-14535:
-

This is getting pretty close to going in; most of my feedback is test related. 
1+ iteration should be enough. Once this and HADOOP-14598 are in, I can do some 
downstream testing with real data.

h3. {{AzureNativeFileSystemStore}}

* How about naming the key of {{KEY_INPUT_STREAM_VERSION}} 
"fs.azure.experimental.stream.version"? That'd be consistent with the 
"fs.s3a.experimental" naming.
* log @ debug choice of algorithm, to aid diagnostics
* {{retrieve()}} L2066: the {{PageBlobInputStream}} constructor already wraps 
StorageException with IOE, so {{retrieve()}} doesn't need to catch and 
translate them; it should simply rethrow IOEs as is.

h3. {{BlockBlobInputStream}}

* a seek to the current position can be downgraded to a no-op; no need to close 
& reopen the stream
* you don't need to go {{this.}} when referencing fields. We expect our IDEs to 
colour code fields these days.
* can you have the {{else}} and {{catch}} statements on the same line as the 
previous clause's closing "}".
* {{read(byte[] buffer, ..)}}. Use {{FSInputStream.validatePositionedReadArgs}} 
for validation, or at least as much of it as is relevant. FWIW, the order of 
checks matches that in InputStream.
* {{closeBlobInputStream}}: should {{blobInputStream = null}} be done in a 
{{finally}} clause, so that it is guaranteed to be cleared even if close() 
throws (making repeated calls safe)?
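The {{finally}} suggestion above can be sketched as follows. This is a minimal standalone example, not the actual Azure code; the class name and the ByteArrayInputStream stand-in are illustrative only.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: null the stream field inside a finally clause so the reference
// is cleared even when close() throws, making a second call a no-op.
public class CloseOnceSketch {
  private InputStream blobInputStream =
      new ByteArrayInputStream(new byte[] {1, 2, 3});

  void closeBlobInputStream() throws IOException {
    if (blobInputStream != null) {
      try {
        blobInputStream.close();
      } finally {
        blobInputStream = null;  // guaranteed, even on exception
      }
    }
  }

  boolean isClosed() {
    return blobInputStream == null;
  }

  public static void main(String[] args) throws IOException {
    CloseOnceSketch s = new CloseOnceSketch();
    s.closeBlobInputStream();
    s.closeBlobInputStream();  // second call is a harmless no-op
    System.out.println(s.isClosed());  // prints "true"
  }
}
```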

h3. {{NativeAzureFileSystem}}

* L625: accidental typo in comment

h3. {{ContractTestUtils.java}}

revert move of {{elapsedTime()}} to a single line method, use multiline style 
for the new {{elapsedTimeMs()}}. 


h3. {{TestBlockBlobInputStream}}

# I like the idea of using the ratio as a way of comparing performance; it 
makes it independent of bandwidth.
# And I agree, you can't reliably assess real-world perf. But it would seem 
faster.
# Once HADOOP-14553 is in, this test would be uprated to a scale test; only 
executed with the -Dscale option, 
and configurable for larger sizes of data. No need to worry about it. I think 
the tests could perhaps even be moved into the 
[ITestAzureHugeFiles|https://github.com/steveloughran/hadoop/blob/azure/HADOOP-14553-testing/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/ITestAzureHugeFiles.java]
 test, which forces a strict ordering of tests in junit, so can have one test 
to upload a file, one to delete, and some in between to play with reading and 
seeking.

for now

* {{TestBlockBlobInputStream}} to extend {{AbstractWasbTestBase}}. This will 
aid migration to the parallel test runner of HADOOP-14553
* {{TestBlockBlobInputStream}} teardown only closes one of the input streams.
* {{toMbps()}}:  would it be better or worse to do the *8 before the / 1000.0? 
Or, given these are floating point, moot?
* split {{testMarkSupported()}} into a separate test for each separate stream; 
assertion in {{validateMarkSupported}} to include some text.
* same for {{testSkipAndAvailableAndPosition}}
* {{testSequentialReadPerformance}}: are we confident that the {{v2ElapsedMs}} 
read time will always be >0? Otherwise that division will fail.
* {{testRandomRead}} and {{testSequentialRead}} to always close the output 
stream, or save a reference to the stream into a field and have the @After 
teardown close it (quietly).
* {{validateMarkAndReset}}, {{validateSkipBounds}} to use 
{{GenericTestUtils.assertExceptionContains}} to validate the caught exception, 
or {{LambdaTestUtils.intercept}} to structure the expected failure. Have a 
look at other uses in the code for details. Same in other tests.

{code}
try {
  seekCheck(in, dataLen + 3);
  Assert.fail("Seek after EOF should fail.");
} catch (IOException e) {
  GenericTestUtils.assertExceptionContains("Cannot seek after EOF", e);
}
{code}

LambdaTestUtils may seem a bit more convoluted in Java 7:

{code}
intercept(IOException.class, expected,
new Callable<S3AEncryptionMethods>() {
  @Override
  public S3AEncryptionMethods call() throws Exception {
return getAlgorithm(alg, key);
  }
});
{code}

But it really comes out to play in Java 8:

{code}
intercept(IOException.class, expected,
() -> getAlgorithm(alg, key));
{code}


That's why I'd recommend adopting it now.

Other:

h3. {{AzureBlobStorageTestAccount}}

* L96; think some tabs have snuck in.
* I have a problem in that every test run leaks wasb containers. Does this 
patch continue, or even worsen, that tradition?



> Support for random access and seek of block blobs
> -
>
> Key: HADOOP-14535
> URL: https://issues.apache.org/jira/browse/HADOOP-14535
> Project: Hadoop Common
>  Issue Type: Improvement
> 

[jira] [Resolved] (HADOOP-14626) NoSuchMethodError in org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy

2017-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14626.
-
Resolution: Invalid

> NoSuchMethodError in 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
> 
>
> Key: HADOOP-14626
> URL: https://issues.apache.org/jira/browse/HADOOP-14626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: mixed JARs on CP
>Reporter: saurab
>Priority: Minor
>







[jira] [Reopened] (HADOOP-14626) NoSuchMethodError in org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy

2017-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14626:
-

> NoSuchMethodError in 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
> 
>
> Key: HADOOP-14626
> URL: https://issues.apache.org/jira/browse/HADOOP-14626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: mixed JARs on CP
>Reporter: saurab
>Priority: Minor
>







[jira] [Updated] (HADOOP-14626) NoSuchMethodError in org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy

2017-07-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14626:

Affects Version/s: 2.6.0
  Environment: mixed JARs on CP

> NoSuchMethodError in 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
> 
>
> Key: HADOOP-14626
> URL: https://issues.apache.org/jira/browse/HADOOP-14626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: mixed JARs on CP
>Reporter: saurab
>Priority: Minor
>







[jira] [Commented] (HADOOP-14634) Remove jline from main Hadoop pom.xml

2017-07-10 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080654#comment-16080654
 ] 

Ray Chiang commented on HADOOP-14634:
-

Thanks for the commit Steve!

It's going to be good to have all these sorts of things cleaned up in time for 
the final Hadoop 3 release.

> Remove jline from main Hadoop pom.xml
> -
>
> Key: HADOOP-14634
> URL: https://issues.apache.org/jira/browse/HADOOP-14634
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14634.001.patch
>
>
> A long time ago, HADOOP-9342 removed jline from being included in the Hadoop 
> distribution.  Since then, more modules have added Zookeeper, and are pulling 
> in jline again.
> Recommend excluding jline from the main Hadoop pom in order to prevent 
> subsequent additions of Zookeeper dependencies from doing this again.






[jira] [Commented] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080601#comment-16080601
 ] 

Jason Lowe commented on HADOOP-14637:
-

I think the precondition check really did catch a bug here.  From the 
TestRMRestart code:
{code}
final int maxRetry = 10;
final RMApp rmAppForCheck = rmApp;
GenericTestUtils.waitFor(
new Supplier<Boolean>() {
  @Override
  public Boolean get() {
return new Boolean(rmAppForCheck.getAppAttempts().size() == 4);
  }
},
100, maxRetry);
Assert.assertEquals(RMAppAttemptState.FAILED,
rmApp.getAppAttempts().get(latestAppAttemptId).getAppAttemptState());
{code}

From the variable names and values, I'm guessing the intent here was to check 
every 100 milliseconds for 10 total checks (i.e. a maximum cumulative wait 
time of one second). However this code is only going to check once, and if 
that check fails it will sleep for 100 milliseconds and then throw an 
exception. That is clearly not intended by the caller, otherwise they would 
skip all this boilerplate and just code up the check directly in their unit 
test.

Besides the bug in TestRMRestart, it would be useful for 
GenericTestUtils.waitFor to do one last check after the time has expired, 
before throwing the timeout exception.  At least that would do something 
semantically useful if we remove this precondition check and allow the total 
wait time to be less than the check interval.
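A minimal sketch of the intended polling semantics, including the "one last check" suggestion. This is illustrative only; the real GenericTestUtils.waitFor differs in details such as its failure handling and exception types.

```java
import java.util.function.Supplier;

// Sketch: poll the condition every checkEveryMillis, up to waitForMillis
// total, and do one final check after the deadline before giving up.
public class WaitForSketch {
  static boolean waitFor(Supplier<Boolean> check,
      long checkEveryMillis, long waitForMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (System.currentTimeMillis() < deadline) {
      if (check.get()) {
        return true;
      }
      Thread.sleep(checkEveryMillis);
    }
    // one last check after the time has expired
    return check.get();
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // condition becomes true after roughly 300 ms
    Supplier<Boolean> check = () -> System.currentTimeMillis() - start > 300;
    // check every 100 ms, wait up to 1000 ms in total -- the semantics the
    // TestRMRestart caller appears to have wanted
    System.out.println(waitFor(check, 100, 1000));  // prints "true"
  }
}
```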


> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}






[jira] [Commented] (HADOOP-14455) ViewFileSystem#rename should support be supported within same nameservice with different mountpoints

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080580#comment-16080580
 ] 

Hadoop QA commented on HADOOP-14455:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 7 new + 291 unchanged 
- 3 fixed = 298 total (was 294) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m  
5s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876414/HADOOP-14455-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f86f5cf50fc1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09653ea |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12753/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 

[jira] [Commented] (HADOOP-14521) KMS client needs retry logic

2017-07-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080574#comment-16080574
 ] 

Rushabh S Shah commented on HADOOP-14521:
-

bq. I tried to run TestAclsEndToEnd before and after applying this patch on 
branch-2.8. It was passing before, failing after.
HADOOP-14563 fixed the failing test issue.
We deployed this change internally on our cluster and found that the client 
does not retry the {{addDelegationTokens}} call.
The reason is the following piece of code.
{code:title=KMSClientProvider.java|borderStyle=solid}
  @Override
  public Token<?>[] addDelegationTokens(final String renewer,
      Credentials credentials) throws IOException {
    ...
    } catch (Exception e) {
      throw new IOException(e);  // ---> catching any Exception and throwing IOException
    }
    return tokens;
  }
{code}
All the other calls check whether the exception is an {{IOException}} and, if 
so, rethrow it as is; otherwise they wrap the actual exception in an 
IOException and rethrow that.
Will put up a new patch including this fix.
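The rethrow pattern used by the other calls can be sketched as below. The class and method names are illustrative, not the actual KMSClientProvider code.

```java
import java.io.IOException;

// Sketch: propagate IOExceptions untouched (so retry logic keyed on
// IOException subtypes still sees them) and wrap everything else.
public class RethrowSketch {
  static void translate(Exception e) throws IOException {
    if (e instanceof IOException) {
      throw (IOException) e;   // rethrow the original IOException as is
    }
    throw new IOException(e);  // wrap any other exception
  }

  public static void main(String[] args) {
    try {
      translate(new IllegalStateException("boom"));
    } catch (IOException e) {
      // the non-IOException cause is preserved inside the wrapper
      System.out.println(e.getCause().getClass().getSimpleName());
      // prints "IllegalStateException"
    }
  }
}
```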


> KMS client needs retry logic
> 
>
> Key: HADOOP-14521
> URL: https://issues.apache.org/jira/browse/HADOOP-14521
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14521.09.patch, HDFS-11804-branch-2.8.patch, 
> HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, 
> HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, 
> HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch
>
>
> The kms client appears to have no retry logic – at all.  It's completely 
> decoupled from the ipc retry logic.  This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may experience higher rates if they do not already have 
> retry logic.
> # Tasks reading EZ files will fail, probably be masked by framework reattempts
> # EZ file creation fails after creating a 0-length file – client receives 
> EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will prematurely fail






[jira] [Updated] (HADOOP-14521) KMS client needs retry logic

2017-07-10 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14521:

Status: Open  (was: Patch Available)

> KMS client needs retry logic
> 
>
> Key: HADOOP-14521
> URL: https://issues.apache.org/jira/browse/HADOOP-14521
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14521.09.patch, HDFS-11804-branch-2.8.patch, 
> HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, 
> HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, 
> HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch
>
>
> The kms client appears to have no retry logic – at all.  It's completely 
> decoupled from the ipc retry logic.  This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may experience higher rates if they do not already have 
> retry logic.
> # Tasks reading EZ files will fail, probably be masked by framework reattempts
> # EZ file creation fails after creating a 0-length file – client receives 
> EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will prematurely fail






[jira] [Commented] (HADOOP-14629) Improve exception checking in FileContext related JUnit tests

2017-07-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080373#comment-16080373
 ] 

Andras Bokor commented on HADOOP-14629:
---

JUnit test failures are unrelated.

> Improve exception checking in FileContext related JUnit tests
> -
>
> Key: HADOOP-14629
> URL: https://issues.apache.org/jira/browse/HADOOP-14629
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.2
>
> Attachments: HADOOP-14629.01.patch
>
>
> {{FileContextMainOperationsBaseTest#rename}} and 
> {{TestHDFSFileContextMainOperations#rename}} do the same thing, but in 
> different ways:
> * FileContextMainOperationsBaseTest is able to distinguish exceptions
> * TestHDFSFileContextMainOperations checks the files in case of error
> We should use one rename method that combines both advantages.






[jira] [Updated] (HADOOP-14455) ViewFileSystem#rename should support be supported within same nameservice with different mountpoints

2017-07-10 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-14455:
--
Attachment: HADOOP-14455-003.patch

[~vinayrpet] thanks for taking a look.

bq.Changing the current behavior straightaway is not correct.
Agreed.
bq.So, a config can be introduced to choose any one among the above, Default 
should be SAME_MOUNTPOINT.
bq.Based on the configuration, validation could vary.
done.

[~ste...@apache.org] thanks for taking a look.
bq. Not just verifying that the code works, but providing information needed to 
debug problems if/when it stops working
Agreed; updated the applicable assertions as you mentioned, using 
{{ContractTestUtils.assertPathExists()}} and 
{{GenericTestUtils.assertExceptionContains()}}.

Uploading the patch, kindly review.

> ViewFileSystem#rename should support be supported within same nameservice 
> with different mountpoints
> 
>
> Key: HADOOP-14455
> URL: https://issues.apache.org/jira/browse/HADOOP-14455
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14455-002.patch, HADOOP-14455-003.patch, 
> HADOOP-14455.patch
>
>
> *Scenario:* 
> || Mount Point || NameService|| Value||
> |/tmp|hacluster|/tmp|
> |/user|hacluster|/user|
> Move file from {{/tmp}} to {{/user}}
> It will fail by throwing the following error
> {noformat}
> Caused by: java.io.IOException: Renames across Mount points not supported
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500)
> at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692)
> ... 22 more
> {noformat}






[jira] [Created] (HADOOP-14640) Azure: Support affinity for service running on localhost and reuse SPNEGO hadoop.auth cookie for authorization, SASKey and delegation token generation

2017-07-10 Thread Santhosh G Nayak (JIRA)
Santhosh G Nayak created HADOOP-14640:
-

 Summary: Azure: Support affinity for service running on localhost 
and reuse SPNEGO hadoop.auth cookie for authorization, SASKey and delegation 
token generation
 Key: HADOOP-14640
 URL: https://issues.apache.org/jira/browse/HADOOP-14640
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 2.9.0
Reporter: Santhosh G Nayak
Assignee: Santhosh G Nayak


Currently, {{WasbRemoteCallHelper}} can be configured to talk to a 
comma-separated list of URLs for authorization, SASKey generation and 
delegation token generation.
To improve performance, if the service runs on the local machine, it should be 
given first preference over the other configured URLs.
Currently, {{WasbRemoteCallHelper}} generates a {{hadoop.auth}} cookie for 
every request by talking to the remote service before making the actual REST 
requests.
The proposal is to reuse the {{hadoop.auth}} cookie for subsequent requests 
from the same {{WasbRemoteCallHelper}} object until its expiry time.
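A minimal, dependency-free sketch of the proposed cookie reuse (the class and method names here are illustrative assumptions, not the actual {{WasbRemoteCallHelper}} API):

```java
import java.util.function.Supplier;

/** Illustrative only: cache an auth cookie and refresh it after expiry. */
class AuthCookieCache {
  private final Supplier<String> cookieFetcher; // stands in for the remote SPNEGO round trip
  private final long ttlMillis;
  private String cookie;
  private long expiresAt;

  AuthCookieCache(Supplier<String> cookieFetcher, long ttlMillis) {
    this.cookieFetcher = cookieFetcher;
    this.ttlMillis = ttlMillis;
  }

  /** Return the cached cookie, performing the remote call only after expiry. */
  synchronized String getCookie(long nowMillis) {
    if (cookie == null || nowMillis >= expiresAt) {
      cookie = cookieFetcher.get(); // the expensive remote call happens only here
      expiresAt = nowMillis + ttlMillis;
    }
    return cookie;
  }
}
```

With this shape, repeated requests within the TTL reuse the cookie, and only the first request after expiry pays for another round trip.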







[jira] [Commented] (HADOOP-14634) Remove jline from main Hadoop pom.xml

2017-07-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080181#comment-16080181
 ] 

Hudson commented on HADOOP-14634:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11982 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11982/])
HADOOP-14634. Remove jline from main Hadoop pom.xml. Contributed by Ray 
(stevel: rev 09653ea098a17fddcf111b0da289085915c351d1)
* (edit) hadoop-project/pom.xml
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* (edit) hadoop-client-modules/hadoop-client/pom.xml


> Remove jline from main Hadoop pom.xml
> -
>
> Key: HADOOP-14634
> URL: https://issues.apache.org/jira/browse/HADOOP-14634
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14634.001.patch
>
>
> A long time ago, HADOOP-9342 removed jline from being included in the Hadoop 
> distribution.  Since then, more modules have added Zookeeper, and are pulling 
> in jline again.
> Recommend excluding jline from the main Hadoop pom in order to prevent 
> subsequent additions of Zookeeper dependencies from doing this again.






[jira] [Commented] (HADOOP-14634) Remove jline from main Hadoop pom.xml

2017-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080157#comment-16080157
 ] 

Steve Loughran commented on HADOOP-14634:
-

hadn't committed; my bad. Will now

> Remove jline from main Hadoop pom.xml
> -
>
> Key: HADOOP-14634
> URL: https://issues.apache.org/jira/browse/HADOOP-14634
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14634.001.patch
>
>
> A long time ago, HADOOP-9342 removed jline from being included in the Hadoop 
> distribution.  Since then, more modules have added Zookeeper, and are pulling 
> in jline again.
> Recommend excluding jline from the main Hadoop pom in order to prevent 
> subsequent additions of Zookeeper dependencies from doing this again.






[jira] [Commented] (HADOOP-14455) ViewFileSystem#rename should be supported within same nameservice with different mountpoints

2017-07-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080098#comment-16080098
 ] 

Steve Loughran commented on HADOOP-14455:
-

Ignoring the semantics of rename, test-wise I think it fails my [well known 
checklist of requirements of 
tests|https://github.com/steveloughran/formality/blob/master/styleguide/styleguide.md],
 namely:
* Will assertions fail with any form of meaningful diagnostics, or will 
debugging Jenkins failures be impossible without further patches to the tests? 
Fix: use {{ContractTestUtils.assertPathExists()}}.
* Does it use hard-coded string comparisons when looking for exception text? 
Fix: a static constant in the production source, referenced from the test.
* If a caught exception is not the one expected, is that caught exception lost, 
or are its stack trace and message propagated? Fix: 
{{GenericTestUtils.assertExceptionContains()}}, or, if targeting Java 8+ only, 
{{LambdaTestUtils.intercept()}}.

Imagine it is 18 months from now, and the test starts failing intermittently on 
Jenkins. Will the tests provide the information needed, or will you (or a 
successor) be left with no idea what's going on? That's what tests should be 
trying to address: not just verifying that the code works, but providing the 
information needed to debug problems if/when it stops working.
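For reference, the intercept pattern mentioned above can be sketched without any test-framework dependency (this is a simplified illustration of the idea, not Hadoop's actual {{LambdaTestUtils}} code):

```java
import java.util.concurrent.Callable;

/** Sketch of intercept(): fail with diagnostics and never lose the caught exception. */
class InterceptSketch {
  static <E extends Throwable> E intercept(Class<E> expected, Callable<?> eval) {
    Object result;
    try {
      result = eval.call();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t); // hand the exception back for further assertions
      }
      // Wrong exception: propagate both the message and the stack trace via the cause.
      throw new AssertionError("Expected " + expected.getName() + " but caught " + t, t);
    }
    // No exception at all: include the unexpected return value in the failure message.
    throw new AssertionError("Expected " + expected.getName() + " but the call returned: " + result);
  }
}
```

Either way the assertion failure carries enough context (the wrong exception with its trace, or the unexpected result) to debug a Jenkins run from the report alone.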

> ViewFileSystem#rename should be supported within same nameservice 
> with different mountpoints
> 
>
> Key: HADOOP-14455
> URL: https://issues.apache.org/jira/browse/HADOOP-14455
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14455-002.patch, HADOOP-14455.patch
>
>
> *Scenario:* 
> || Mount Point || NameService|| Value||
> |/tmp|hacluster|/tmp|
> |/user|hacluster|/user|
> Move file from {{/tmp}} to {{/user}}
> It will fail by throwing the following error
> {noformat}
> Caused by: java.io.IOException: Renames across Mount points not supported
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500)
> at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692)
> ... 22 more
> {noformat}






[jira] [Commented] (HADOOP-14455) ViewFileSystem#rename should be supported within same nameservice with different mountpoints

2017-07-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080086#comment-16080086
 ] 

Vinayakumar B commented on HADOOP-14455:


Thanks for reporting this and posting the patch, [~brahmareddy].

No historical reasons were mentioned, or could be found, for allowing only one 
of the alternatives mentioned in the comments in ViewFileSystem#rename(..).

Changing the current behavior straightaway is not correct, so it would be 
better to introduce a configuration that keeps the current behavior intact by 
default and lets users opt in to the desired behavior.

There are 3 rename strategies available as of now:
1. SAME_FILESYSTEM_ACROSS_MOUNTPOINT
2. SAME_TARGET_URI_ACROSS_MOUNTPOINT
3. SAME_MOUNTPOINT

So a config can be introduced to choose any one among the above; the default 
should be SAME_MOUNTPOINT. Based on the configuration, the validation could 
vary.
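A hedged sketch of how such a strategy switch could validate renames (the enum constants come from the list above; the method shape and its parameters are illustrative assumptions, not the committed implementation):

```java
/** Illustrative rename-strategy switch for ViewFileSystem-style mount tables. */
class RenameStrategyCheck {
  enum RenameStrategy {
    SAME_FILESYSTEM_ACROSS_MOUNTPOINT,
    SAME_TARGET_URI_ACROSS_MOUNTPOINT,
    SAME_MOUNTPOINT
  }

  /** Decide whether a rename between two mount points is allowed under the configured strategy. */
  static boolean renameAllowed(RenameStrategy strategy,
                               String srcMount, String dstMount,
                               String srcFs, String dstFs,
                               String srcTargetUri, String dstTargetUri) {
    switch (strategy) {
      case SAME_MOUNTPOINT:
        return srcMount.equals(dstMount);       // current (default) behavior
      case SAME_TARGET_URI_ACROSS_MOUNTPOINT:
        return srcTargetUri.equals(dstTargetUri);
      case SAME_FILESYSTEM_ACROSS_MOUNTPOINT:
        return srcFs.equals(dstFs);             // e.g. both resolve to the same nameservice
      default:
        return false;
    }
  }
}
```

Under SAME_MOUNTPOINT this reproduces today's behavior (the /tmp to /user move in the scenario is rejected), while SAME_FILESYSTEM_ACROSS_MOUNTPOINT would allow it, since both mount points resolve to hacluster.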

> ViewFileSystem#rename should be supported within same nameservice 
> with different mountpoints
> 
>
> Key: HADOOP-14455
> URL: https://issues.apache.org/jira/browse/HADOOP-14455
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14455-002.patch, HADOOP-14455.patch
>
>
> *Scenario:* 
> || Mount Point || NameService|| Value||
> |/tmp|hacluster|/tmp|
> |/user|hacluster|/user|
> Move file from {{/tmp}} to {{/user}}
> It will fail by throwing the following error
> {noformat}
> Caused by: java.io.IOException: Renames across Mount points not supported
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500)
> at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692)
> ... 22 more
> {noformat}






[jira] [Created] (HADOOP-14639) test YARN log collection works to s3a

2017-07-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14639:
---

 Summary: test YARN log collection works to s3a
 Key: HADOOP-14639
 URL: https://issues.apache.org/jira/browse/HADOOP-14639
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


Extend the s3a+ YARN tests to verify that log collection can use s3a:// URLs as 
a destination
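A hedged illustration of the destination setup being tested, using the standard log-aggregation property (the bucket name is a placeholder, and the surrounding yarn-site.xml is elided):

```xml
<!-- Illustrative yarn-site.xml fragment: point YARN log aggregation at an s3a bucket. -->
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>s3a://example-bucket/app-logs</value>
</property>
```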






[jira] [Updated] (HADOOP-14635) Javadoc correction for AccessControlList#buildACL

2017-07-10 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14635:
--
Attachment: HADOOP-14635-001.patch

> Javadoc correction for AccessControlList#buildACL
> -
>
> Key: HADOOP-14635
> URL: https://issues.apache.org/jira/browse/HADOOP-14635
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14635-001.patch
>
>
> {{AccessControlList#buildACL}} 
> {code}
>   /**
>* Build ACL from the given two Strings.
>* The Strings contain comma separated values.
>*
>* @param aclString build ACL from array of Strings
>*/
>   private void buildACL(String[] userGroupStrings) {
> {code}
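The fix is simply making the javadoc describe the real parameter; a hedged sketch of what the corrected doc could look like, shown on a minimal stand-in method (not Hadoop's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-in to illustrate the corrected javadoc; not Hadoop's AccessControlList. */
class AclDocSketch {
  /**
   * Build the ACL from the given array of Strings.
   * Each String contains comma-separated values.
   *
   * @param userGroupStrings array of Strings to build the ACL from
   */
  static List<String> buildAcl(String[] userGroupStrings) {
    List<String> entries = new ArrayList<>();
    for (String s : userGroupStrings) {
      for (String part : s.split(",")) {
        String trimmed = part.trim();
        if (!trimmed.isEmpty()) {
          entries.add(trimmed); // collect every non-empty comma-separated value
        }
      }
    }
    return entries;
  }
}
```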






[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080020#comment-16080020
 ] 

Akira Ajisaka commented on HADOOP-14638:


+1

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this work out as a separate issue.
> StreamPumper currently accepts only the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that failed with slf4j.






[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080007#comment-16080007
 ] 

Hadoop QA commented on HADOOP-14638:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
6s{color} | {color:green} root generated 0 new + 1358 unchanged - 1 fixed = 
1358 total (was 1359) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 26 unchanged - 1 fixed = 27 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14638 |
| GITHUB PR | https://github.com/apache/hadoop/pull/247 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 28612abbcf93 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3de47ab |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12752/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12752/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12752/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12752/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: 

[jira] [Updated] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14638:
---
Status: Patch Available  (was: Open)

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this work out as a separate issue.
> StreamPumper currently accepts only the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that failed with slf4j.






[jira] [Commented] (HADOOP-14539) Move commons logging APIs over to slf4j in hadoop-common

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079956#comment-16079956
 ] 

Hadoop QA commented on HADOOP-14539:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 85 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 16s{color} 
| {color:red} root generated 9 new + 1306 unchanged - 53 fixed = 1315 total 
(was 1359) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 51s{color} | {color:orange} hadoop-common-project: The patch generated 102 
new + 5122 unchanged - 60 fixed = 5224 total (was 5182) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 56s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.ha.TestShellCommandFencer |
|   | hadoop.security.TestShellBasedUnixGroupsMapping |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14539 |
| GITHUB PR | https://github.com/apache/hadoop/pull/246 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d6ba273edbb9 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3de47ab |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12751/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12751/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
 |
| unit | 

[jira] [Updated] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14638:
---
Status: Open  (was: Patch Available)

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this work out as a separate issue.
> StreamPumper currently accepts only the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that failed with slf4j.






[jira] [Updated] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14638:
---
Attachment: HADOOP-14638.002.patch

002.patch:
Fixes the following checkstyle issue:
{noformat}
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java:195:
private static final List delegateMethods = 
Lists.asList("error",:39: Name 'delegateMethods' must match pattern 
'^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName]{noformat}
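The rename applied here is the conventional fix for a ConstantName warning: static final fields use UPPER_SNAKE_CASE. A small illustrative sketch (the list contents are assumptions, not the exact test code):

```java
import java.util.Arrays;
import java.util.List;

class ConstantNameSketch {
  // Before: "private static final List<String> delegateMethods = ..." trips ConstantName.
  // After: the name matches the ^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$ pattern.
  private static final List<String> DELEGATE_METHODS =
      Arrays.asList("error", "warn", "info", "debug", "trace");

  static boolean isDelegated(String method) {
    return DELEGATE_METHODS.contains(method);
  }
}
```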

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch, HADOOP-14638.002.patch
>
>
> HADOOP-14539 is big, so we split this work out as a separate issue.
> StreamPumper currently accepts only the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that failed with slf4j.






[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079941#comment-16079941
 ] 

Akira Ajisaka commented on HADOOP-14638:


LGTM, +1 pending Jenkins.

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch
>
>
> HADOOP-14539 is big, so we split this work out as a separate issue.
> StreamPumper currently accepts only the commons-logging logger API, so we 
> should change the StreamPumper API to accept slf4j and update the related 
> code, including some tests in TestShellCommandFencer that failed with slf4j.






[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079937#comment-16079937
 ] 

Hadoop QA commented on HADOOP-14638:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
9s{color} | {color:green} root generated 0 new + 1358 unchanged - 1 fixed = 
1358 total (was 1359) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 26 unchanged - 1 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14638 |
| GITHUB PR | https://github.com/apache/hadoop/pull/247 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 687bad41c5cf 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3de47ab |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12750/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12750/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12750/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12750/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: 

[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079943#comment-16079943
 ] 

Akira Ajisaka commented on HADOOP-14638:


Would you fix the following checkstyle issue?
{noformat}
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java:195:
private static final List delegateMethods = 
Lists.asList("error",:39: Name 'delegateMethods' must match pattern 
'^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName]
{noformat}
I think we can ignore the following issue:
{noformat}
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java:64:
  static Logger LOG = LoggerFactory.getLogger(ShellCommandFencer.class);:17: 
Name 'LOG' must match pattern '^[a-z][a-zA-Z0-9]*$'. [StaticVariableName]
{noformat}
I'm +1 if that is addressed. Thanks!

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch
>
>
> HADOOP-14539 is big, so we split off a separate issue from it.
> Now StreamPumper only accepts the commons-logging logger API, so we should 
> change the StreamPumper API to accept slf4j and update the related code, 
> including some tests in TestShellCommandFencer that failed with slf4j.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14638) Replace commons-logging APIs with slf4j in StreamPumper

2017-07-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079919#comment-16079919
 ] 

Akira Ajisaka commented on HADOOP-14638:


The test failure is as follows:
{noformat}
org.mockito.exceptions.base.MockitoException: 
Cannot mock/spy class org.slf4j.impl.Log4jLoggerAdapter
Mockito cannot mock/spy following:
  - final classes
  - anonymous classes
  - primitive types
at 
org.apache.hadoop.ha.TestShellCommandFencer.setupLogMock(TestShellCommandFencer.java:52)
{noformat}
org.slf4j.impl.Log4jLoggerAdapter is final, so we cannot spy it. I'll review 
your patch.
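Why the spy fails can be sketched without Mockito itself. Subclass-based mocking generates a dynamic subclass of the target class, which the JVM forbids for final classes; the nested class below is a hypothetical stand-in for the real org.slf4j.impl.Log4jLoggerAdapter:

```java
import java.lang.reflect.Modifier;

public class FinalClassCheck {
    // Stand-in for org.slf4j.impl.Log4jLoggerAdapter, which is declared final.
    static final class LoggerAdapterStandIn { }

    // Mockito's default mechanism proxies a class by subclassing it, so a
    // final class cannot be mocked or spied this way.
    static boolean isMockableBySubclassing(Class<?> c) {
        return !Modifier.isFinal(c.getModifiers());
    }

    public static void main(String[] args) {
        System.out.println(isMockableBySubclassing(LoggerAdapterStandIn.class)); // prints "false"
    }
}
```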

> Replace commons-logging APIs with slf4j in StreamPumper
> ---
>
> Key: HADOOP-14638
> URL: https://issues.apache.org/jira/browse/HADOOP-14638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14638.001.patch
>
>
> HADOOP-14539 is big, so we split off a separate issue from it.
> Now StreamPumper only accepts the commons-logging logger API, so we should 
> change the StreamPumper API to accept slf4j and update the related code, 
> including some tests in TestShellCommandFencer that failed with slf4j.






[jira] [Commented] (HADOOP-14044) Synchronization issue in delegation token cancel functionality

2017-07-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079926#comment-16079926
 ] 

Vinayakumar B commented on HADOOP-14044:


This could be a branch-2.8 and branch-2.7 candidate as well.

> Synchronization issue in delegation token cancel functionality
> --
>
> Key: HADOOP-14044
> URL: https://issues.apache.org/jira/browse/HADOOP-14044
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: dt_fail.log, dt_success.log, HADOOP-14044-001.patch, 
> HADOOP-14044-002.patch, HADOOP-14044-003.patch
>
>
> We are using the Hadoop delegation token authentication functionality in 
> Apache Solr. As part of integration testing, I found the following issue with 
> the delegation token cancellation functionality.
> Consider a setup with 2 Solr servers (S1 and S2) which are configured to use 
> delegation token functionality backed by Zookeeper. Now invoke following 
> steps,
> [Step 1] Send a request to S1 to create a delegation token.
>   (Delegation token DT is created successfully)
> [Step 2] Send a request to cancel DT to S2
>   (DT is canceled successfully; the client receives an HTTP 200 response)
> [Step 3] Send a request to cancel DT to S2 again
>   (DT cancellation fails; the client receives an HTTP 404 response)
> [Step 4] Send a request to cancel DT to S1
> At this point we get one of two different responses:
> - DT cancellation fails; the client receives an HTTP 404 response
> - DT cancellation succeeds; the client receives an HTTP 200 response
> Also, as per the current implementation, each server maintains an in-memory 
> cache of current tokens which is updated using the ZK watch mechanism; e.g. 
> the ZK watch on S1 will ensure that the in-memory cache is synchronized after 
> step 2.
> After investigation, I found that the root cause of this behavior is a race 
> condition between step 4 and the firing of the ZK watch on S1. Whenever the 
> watch fires before step 4, we get an HTTP 404 response (as expected). When 
> that is not the case, we get an HTTP 200 response along with the following 
> ERROR message in the log,
> {noformat}
> Attempted to remove a non-existing znode /ZKDTSMTokensRoot/DT_XYZ
> {noformat}
> From the client's perspective, the server *should* return an HTTP 404 error 
> when a cancel request is sent for an invalid token.
> Ref: Here is the relevant Solr unit test for reference,
> https://github.com/apache/lucene-solr/blob/746786636404cdb8ce505ed0ed02b8d9144ab6c4/solr/core/src/test/org/apache/solr/cloud/TestSolrCloudWithDelegationTokens.java#L285
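For illustration only, here is a single-process sketch of the idempotent cancel behavior the reporter expects — a cancel reports success only if this call actually removed the token. This is not Hadoop's or Solr's actual implementation; the class and method names are invented, and the real fix must also consult the shared ZooKeeper store rather than only a local map:

```java
import java.util.concurrent.ConcurrentHashMap;

public class TokenCancelSketch {
    // Hypothetical stand-in for the per-server token cache. In the real
    // system this cache is kept in sync with ZooKeeper via watches.
    private final ConcurrentHashMap<String, Long> tokens = new ConcurrentHashMap<>();

    void create(String id) {
        tokens.put(id, System.currentTimeMillis());
    }

    // Returns 200 only if this call removed the token, 404 otherwise.
    // Keying success on the atomic remove() makes repeated cancels of the
    // same token deterministic, regardless of watch-firing order.
    int cancel(String id) {
        return tokens.remove(id) != null ? 200 : 404;
    }

    public static void main(String[] args) {
        TokenCancelSketch s = new TokenCancelSketch();
        s.create("DT_XYZ");
        System.out.println(s.cancel("DT_XYZ")); // prints "200"
        System.out.println(s.cancel("DT_XYZ")); // prints "404"
    }
}
```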






[jira] [Commented] (HADOOP-14637) After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() fails with IllegalArgumentException

2017-07-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079904#comment-16079904
 ] 

Daniel Templeton commented on HADOOP-14637:
---

Unit test failures are unrelated.

> After HADOOP-14568, TestRMRestart.testRMRestartWaitForPreviousAMToFinish() 
> fails with IllegalArgumentException
> --
>
> Key: HADOOP-14637
> URL: https://issues.apache.org/jira/browse/HADOOP-14637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: newbie
> Attachments: HADOOP-14637.001.patch, HADOOP-14637.002.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testRMRestartWaitForPreviousAMToFinish(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 23.718 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:341)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {noformat}
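A hedged re-creation of the precondition that throws here — the real check lives in GenericTestUtils.waitFor and uses Guava's Preconditions.checkArgument; the method name below and the exact comparison are assumptions for illustration:

```java
public class WaitForPrecondition {
    // Sketch of the argument check added around HADOOP-14568: waitFor polls a
    // condition every checkEveryMillis until waitForMillis elapses, so the
    // total wait must exceed the polling interval.
    static void checkWaitArgs(long checkEveryMillis, long waitForMillis) {
        if (waitForMillis < checkEveryMillis) {
            throw new IllegalArgumentException(
                "Total wait time should be greater than check interval time");
        }
    }

    public static void main(String[] args) {
        try {
            checkWaitArgs(1000, 200); // interval larger than total wait: rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```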


