[jira] [Commented] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335476#comment-15335476
 ] 

Akira AJISAKA commented on HADOOP-13149:


Thanks, Chris, for testing the patch. Could you give a +1 to this? I agree with 
you that HADOOP-12899 should be backported as well.

> Windows distro build fails on dist-copynativelibs.
> --
>
> Key: HADOOP-13149
> URL: https://issues.apache.org/jira/browse/HADOOP-13149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-13149.001.patch, HADOOP-13149.branch-2.01.patch
>
>
> HADOOP-12892 pulled the dist-copynativelibs script into an external file.  
> The call to this script is failing when running a distro build on Windows.






[jira] [Updated] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-16 Thread binde (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binde updated HADOOP-13192:
---
Attachment: 0002-fix-bug-hadoop-1392-add-test-case-for-LineReader.patch
0001-HADOOP-13192-org.apache.hadoop.util.LineReader-match.patch

Fix bug HADOOP-13192. Two patches.

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>Assignee: binde
> Attachments: 
> 0001-HADOOP-13192-org.apache.hadoop.util.LineReader-match.patch, 
> 0002-fix-bug-hadoop-1392-add-test-case-for-LineReader.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }
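
For reference, a quick way to compare the two loops is to replay the matching
logic on an in-memory buffer. Below is a minimal, self-contained sketch (the
class and method names are made up for illustration; this is not the actual
LineReader internals) with the proposed {{bufferPosn -= delPosn}} backtrack
applied. Reverting that line to {{bufferPosn--}} makes the delimiter in
"aaaabccc" go unmatched, so the whole line comes back unsplit.

    import java.util.ArrayList;
    import java.util.List;

    public class CustomDelimiterSketch {
      // Splits data on delim using the same scan-and-backtrack logic as
      // readCustomLine(), with the proposed fix applied.
      static List<String> split(byte[] data, byte[] delim) {
        List<String> records = new ArrayList<>();
        int start = 0, delPosn = 0;
        for (int bufferPosn = 0; bufferPosn < data.length; ++bufferPosn) {
          if (data[bufferPosn] == delim[delPosn]) {
            delPosn++;
            if (delPosn >= delim.length) {
              records.add(new String(data, start,
                  bufferPosn + 1 - delim.length - start));
              start = bufferPosn + 1;
              delPosn = 0;
            }
          } else if (delPosn != 0) {
            bufferPosn -= delPosn; // fix: back up over the whole partial match
            delPosn = 0;
          }
        }
        records.add(new String(data, start, data.length - start));
        return records;
      }

      public static void main(String[] args) {
        // Prints [a, ccc]; with the one-byte backtrack it prints [aaaabccc].
        System.out.println(split("aaaabccc".getBytes(), "aaab".getBytes()));
      }
    }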






[jira] [Commented] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335451#comment-15335451
 ] 

Hadoop QA commented on HADOOP-13239:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 12s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 generated 10 new + 1 unchanged - 5 fixed = 11 total (was 6) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 15s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.7.0_101 with JDK v1.7.0_101 generated 14 new + 1 unchanged - 5 fixed = 15 total (was 6) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 14 new + 115 unchanged - 13 fixed = 129 total (was 128) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 28s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:d1c475d |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811289/HADOOP-13239-branch-2.001.patch |
| JIRA Issue | HADOOP-13239 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 32ec446d36dd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335445#comment-15335445
 ] 

Akira AJISAKA commented on HADOOP-9613:
---

Hi [~ozawa], would you update the pull request as well? If there is a pull 
request, the precommit job always runs against the pull request instead of the 
attached patch.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.
> The existing version is 1.8, which is quite old.






[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Attachment: HADOOP-13239-branch-2.001.patch

The v1 patch addresses the {{hasWarnedDeprecation}} checkstyle warning. The 
other warnings are expected, as we're deprecating classes.

> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13239-branch-2.000.patch, 
> HADOOP-13239-branch-2.001.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts s3:// from the {{trunk}} branch, and this JIRA ticket is 
> to deprecate it in {{branch-2}}:
> # Mark the Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated - 
> will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.
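
A minimal sketch of item 2, reusing the {{hasWarnedDeprecation}} flag name
mentioned in the comments above; the class name, wiring, and message wording
here are illustrative, not the actual patch:

    import java.util.concurrent.atomic.AtomicBoolean;

    @Deprecated
    public final class S3DeprecationWarning {
      private static final AtomicBoolean hasWarnedDeprecation =
          new AtomicBoolean(false);

      // Invoked when an S3 FileSystem instance is created; warns only on the
      // first call in a given JVM.
      public static void warnOnce() {
        if (hasWarnedDeprecation.compareAndSet(false, true)) {
          System.err.println("WARN: s3:// is deprecated and will be removed "
              + "in future releases; use s3a:// instead.");
        }
      }

      public static void main(String[] args) {
        warnOnce(); // prints the warning
        warnOnce(); // silent: this JVM has already warned
      }
    }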






[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335417#comment-15335417
 ] 

Akira AJISAKA commented on HADOOP-13192:


Hi [~zhudebin], I assigned this issue to you.
Please hit "Submit Patch" instead of "Resolve Issue" after you update the pull 
request.

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>Assignee: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Updated] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13192:
---
Assignee: binde

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>Assignee: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Updated] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13192:
---
Status: Patch Available  (was: Reopened)

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Reopened] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reopened HADOOP-13192:


> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335390#comment-15335390
 ] 

Akira AJISAKA commented on HADOOP-12893:


Thanks [~ebadger] for reporting this. This error happens in branch-2.6 as well.
bq. If I move the hadoop-build-tools module ahead of hadoop-project in the 
hadoop/pom.xml file, then it builds fine.
I'm +1 for doing this. May I revert this commit from branch-2.7/2.7.3/2.6 and 
provide patches?
cc: [~andrew.wang], [~leftnoteasy]
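
For illustration, the reordering quoted above amounts to moving one entry in
the <modules> list of hadoop/pom.xml (module list abbreviated here; the
remaining modules stay as they are):

    <modules>
      <module>hadoop-build-tools</module> <!-- moved ahead of hadoop-project -->
      <module>hadoop-project</module>
      <!-- ... remaining modules unchanged ... -->
    </modules>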

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335384#comment-15335384
 ] 

Hadoop QA commented on HADOOP-13239:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 generated 10 new + 1 unchanged - 5 fixed = 11 total (was 6) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.7.0_101 with JDK v1.7.0_101 generated 14 new + 1 unchanged - 5 fixed = 15 total (was 6) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 15 new + 115 unchanged - 13 fixed = 130 total (was 128) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 47s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:d1c475d |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811279/HADOOP-13239-branch-2.000.patch |
| JIRA Issue | HADOOP-13239 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 44d3f760062f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Attachment: (was: HADOOP-13239-branch-2.000.patch)

> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13239-branch-2.000.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts s3:// from the {{trunk}} branch, and this JIRA ticket is 
> to deprecate it in {{branch-2}}:
> # Mark the Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated - 
> will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.






[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Attachment: HADOOP-13239-branch-2.000.patch

> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13239-branch-2.000.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts s3:// from the {{trunk}} branch, and this JIRA ticket is 
> to deprecate it in {{branch-2}}:
> # Mark the Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated - 
> will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.






[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Status: Patch Available  (was: Open)

> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13239-branch-2.000.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts s3:// from the {{trunk}} branch, and this JIRA ticket is 
> to deprecate it in {{branch-2}}:
> # Mark the Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated - 
> will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.






[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Attachment: HADOOP-13239-branch-2.000.patch

> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13239-branch-2.000.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts s3:// from the {{trunk}} branch, and this JIRA ticket is 
> to deprecate it in {{branch-2}}:
> # Mark the Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated - 
> will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.






[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335351#comment-15335351
 ] 

Xiao Chen commented on HADOOP-13255:


Thanks again Xiaoyu for the backports.

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.branch-2.patch, HADOOP-13255.test.patch
>
>
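
For context, the fix named in the title comes down to refreshing the Kerberos
TGT before each delegation-token operation. A hedged sketch follows: the
wrapper method and its name are illustrative, but
{{UserGroupInformation#checkTGTAndReloginFromKeytab()}} is an existing UGI
method that re-logs in from the keytab when the TGT is missing or near expiry.

    import java.io.IOException;
    import org.apache.hadoop.security.UserGroupInformation;

    public class TgtRenewSketch {
      // Call before delegation-token operations (get/renew/cancel-style);
      // for keytab-based logins this renews a stale TGT first.
      static void ensureFreshTgt() throws IOException {
        UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
      }
    }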







[jira] [Comment Edited] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335307#comment-15335307
 ] 

Xiaoyu Yao edited comment on HADOOP-13255 at 6/17/16 3:25 AM:
--

Thanks [~xiaochen] for the branch-2 patch. I've committed the patch (with git 
apply --whitespace=fix) to trunk, branch-2, branch-2.8, branch-2.7.3, and 
branch-2.6.5.


was (Author: xyao):
Thanks [~xiaochen] for the branch-2 patch. I've committed the patch to trunk, 
branch-2, branch-2.8, branch-2.7.3, and branch-2.6.5.

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.branch-2.patch, HADOOP-13255.test.patch
>
>







[jira] [Updated] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13255:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   2.6.5
   2.7.3
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaochen] for the branch-2 patch. I've committed the patch to trunk, 
branch-2, branch-2.8, branch-2.7.3, and branch-2.6.5.

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.branch-2.patch, HADOOP-13255.test.patch
>
>







[jira] [Resolved] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-16 Thread binde (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binde resolved HADOOP-13192.

Resolution: Fixed

Added a test case.

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Updated] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-16 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13189:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.6.5
   Status: Resolved  (was: Patch Available)

I just committed this to trunk and branches 2 through 2.6. Thank you [~redvine].

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Fix For: 2.6.5
>
> Attachments: HADOOP-13189.001.patch, HADOOP-13189.002.patch, 
> HADOOP-13189.003-branch-2.7.patch, HADOOP-13189.003.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues somewhere.
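
A minimal sketch of the capacity fix described above (the rounding choice and
identifier names are illustrative; the committed patch may divide differently):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class FairCallQueueCapacitySketch {
      public static void main(String[] args) {
        int capacity = 100;     // configured total callQueue capacity
        int priorityLevels = 4; // number of sub-queues (default)
        // Divide the configured capacity across the sub-queues (rounding up)
        // instead of giving every sub-queue the full capacity.
        int perQueue = (capacity + priorityLevels - 1) / priorityLevels;
        List<BlockingQueue<Object>> queues = new ArrayList<>();
        for (int i = 0; i < priorityLevels; i++) {
          queues.add(new ArrayBlockingQueue<>(perQueue));
        }
        // 4 sub-queues x 25 slots = 100 total, not the buggy 4 x 100 = 400.
        System.out.println(queues.size() + " sub-queues x " + perQueue + " slots");
      }
    }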






[jira] [Updated] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-16 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13189:
-
Attachment: HADOOP-13189.003-branch-2.7.patch

This is a version of the patch for branch-2.7 and 2.6.
I removed {{testTotalCapacityOfSubQueues()}} because the {{priorityLevels}} 
parameter for {{FairCallQueue()}} is not available.

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch, HADOOP-13189.002.patch, 
> HADOOP-13189.003-branch-2.7.patch, HADOOP-13189.003.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues somewhere.






[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335234#comment-15335234
 ] 

Hadoop QA commented on HADOOP-13280:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 49s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 47s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 45s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 31s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_101 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:b35c7cd |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811258/HADOOP-13280-branch-2.8.000.patch |
| JIRA Issue | HADOOP-13280 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 99767f3927fd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 04682cc |
| Default Java | 1.7.0_101 |

[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-06-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Status: Patch Available  (was: In Progress)

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, 
> HADOOP-12943.003.patch, HADOOP-12943.004.patch, HADOOP-12943.005.patch
>
>
> Currently the dfs -test command only supports the
>   -d, -e, -f, -s, -z
> options. It would be helpful to add
>   -w, -r
> to verify read/write permission before an actual read or write. This will help 
> script programming.
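
For illustration, the checks behind -r and -w could delegate to the existing
{{FileSystem#access()}} call, which throws {{AccessControlException}} when the
current user lacks the requested permission. A hedged sketch (the path and the
wiring are made up for this example; only the access() API itself is real):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.security.AccessControlException;

    public class TestAccessSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/user/example/data.txt"); // illustrative path
        try {
          fs.access(p, FsAction.READ);  // what a -r check could delegate to
          fs.access(p, FsAction.WRITE); // what a -w check could delegate to
          System.out.println("readable and writable");
        } catch (AccessControlException e) {
          System.out.println("permission denied: " + e.getMessage());
        }
      }
    }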






[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-06-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Status: In Progress  (was: Patch Available)

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, 
> HADOOP-12943.003.patch, HADOOP-12943.004.patch, HADOOP-12943.005.patch
>
>
> Currently the dfs -test command only supports the
>   -d, -e, -f, -s, -z
> options. It would be helpful to add
>   -w, -r
> to verify read/write permission before an actual read or write. This will help 
> script programming.






[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335203#comment-15335203
 ] 

Hudson commented on HADOOP-13189:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9974 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9974/])
HADOOP-13189. FairCallQueue makes callQueue larger than the configured (shv: 
rev a2a5cb60b09491cb672978ba9442f02373392c67)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java


> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch, HADOOP-13189.002.patch, 
> HADOOP-13189.003.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues somewhere.






[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335148#comment-15335148
 ] 

Hadoop QA commented on HADOOP-13255:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 54s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 3s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 26s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 3s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:d1c475d |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811245/HADOOP-13255.branch-2.patch |

[jira] [Updated] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13280:
---
Attachment: HADOOP-13280-branch-2.8.000.patch

The patch cannot be applied cleanly to {{branch-2.8}}. I prepared a separate 
patch for {{branch-2.8}}; see [^HADOOP-13280-branch-2.8.000.patch].

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently a {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As for {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.
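
A minimal sketch of the change described above, using simplified stand-in types
(the real classes live in org.apache.hadoop.fs; only the readOps +
largeReadOps arithmetic is the point here):

    public class ReadOpsSumSketch {
      // Stand-in for FileSystem$Statistics.
      static class Stats {
        int readOps = 7, largeReadOps = 3;
        int getReadOps() { return readOps + largeReadOps; } // existing sum
      }

      // Stand-in for FileSystemStorageStatistics#getLong("readOps"):
      // delegate to getReadOps() so the sum is returned, rather than
      // reading the raw readOps field and dropping largeReadOps.
      static Long getLong(Stats stats, String key) {
        return "readOps".equals(key) ? (long) stats.getReadOps() : null;
      }

      public static void main(String[] args) {
        System.out.println(getLong(new Stats(), "readOps")); // prints 10, not 7
      }
    }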






[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335115#comment-15335115
 ] 

Mingliang Liu commented on HADOOP-13280:


My compiler complains: "Incompatible types: int cannot be converted to 
java.lang.Long". Even a simple statement with a constant number, like {{Long 
longInst = 10;}}, gets the same error.

I think autoboxing and the implicit widening cast cannot happen at the same 
time according to the JLS.
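
A quick way to see the rule (in JLS assignment contexts, either a widening
primitive conversion or a boxing conversion is allowed, but not a widening
followed by boxing):

    public class BoxingWideningDemo {
      public static void main(String[] args) {
        // Long bad = 10;      // does not compile: int -> long -> Long
        Long ok1 = 10L;        // boxing only: long literal -> Long
        long ok2 = 10;         // widening only: int -> long
        Long ok3 = (long) 10;  // widen explicitly, then box
        System.out.println(ok1 + " " + ok2 + " " + ok3);
      }
    }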

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As to {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335098#comment-15335098
 ] 

Hadoop QA commented on HADOOP-13242:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811253/HADOOP-13242-006.patch
 |
| JIRA Issue | HADOOP-13242 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 0222f67b609a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bf78040 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9807/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9807/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> 

[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13242:
--
Attachment: HADOOP-13242-006.patch

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch, 
> HADOOP-13242-005.patch, HADOOP-13242-006.patch, HDFS-10462-001.patch, 
> HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 
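As background, a hedged sketch of the kind of token request this implies; the 
endpoint, tenant, IDs, and resource URI below are placeholders following Azure 
AD's documented client-credentials grant, not values from the patch:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Hedged sketch: an OAuth2 client-credentials token request carrying the
// "resource" parameter that Azure AD requires. All values are placeholders.
public class AdTokenSketch {
  public static void main(String[] args) throws Exception {
    String body = "grant_type=client_credentials"
        + "&client_id=" + URLEncoder.encode("<client-id>", "UTF-8")
        + "&client_secret=" + URLEncoder.encode("<client-key>", "UTF-8")
        + "&resource=" + URLEncoder.encode("https://datalake.azure.net/", "UTF-8");
    URL url = new URL("https://login.microsoftonline.com/<tenant>/oauth2/token");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes("UTF-8"));
    }
    System.out.println("HTTP " + conn.getResponseCode()); // token JSON on 200
  }
}
{code}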



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13284) FileSystemStorageStatistics must not attempt to read non-existent rack-aware read stats in branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335049#comment-15335049
 ] 

Mingliang Liu commented on HADOOP-13284:


Thanks [~hitesh] for reporting this. (I was not able to change the Reporter in 
this JIRA)

Thanks [~cmccabe] for prompt rereview and commit.

> FileSystemStorageStatistics must not attempt to read non-existent rack-aware 
> read stats in branch-2.8
> -
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include the rack-aware read 
> stats brought in by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long iterator will throw an 
> NPE while traversing. See the detailed exception stack below (it happens when 
> Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.
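To see why the missing keys surface as an NPE rather than as absent entries, 
here is a tiny standalone reproduction (an assumed shape for illustration, not 
the actual Hadoop source):

{code}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the failure mode: KEYS names a stat with no backing
// counter in branch-2.8, the map lookup returns a null Long, and
// auto-unboxing it into a primitive long throws NullPointerException.
public class NpeSketch {
  static final String[] KEYS = {"readOps", "bytesReadLocalHost"};
  static final Map<String, Long> COUNTERS = new HashMap<>();
  static { COUNTERS.put("readOps", 42L); } // no rack-aware counters here

  public static void main(String[] args) {
    for (String key : KEYS) {
      long value = COUNTERS.get(key); // NPE on "bytesReadLocalHost"
      System.out.println(key + " = " + value);
    }
  }
}
{code}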



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335051#comment-15335051
 ] 

Hudson commented on HADOOP-12975:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9973 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9973/])
HADOOP-12975. Add jitter to CachingGetSpaceUsed's thread (Elliott Clark 
(cmccabe: rev bf780406f2b30e627bdf36ac07973f6931f81106)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/WindowsGetSpaceUsed.java


> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0
>
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.
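The idea is simply to randomize each refresh thread's sleep so the du 
processes stop lining up; a minimal sketch (interval and jitter values are 
illustrative, not the patch's defaults):

{code}
import java.util.concurrent.ThreadLocalRandom;

// Hedged sketch of interval jitter: instead of every refresh thread
// sleeping exactly intervalMs and waking up together, each sleep is
// perturbed by a uniform random offset in [-jitterMs, +jitterMs].
public class JitterSketch {
  static long nextRefreshDelay(long intervalMs, long jitterMs) {
    long offset = jitterMs == 0
        ? 0
        : ThreadLocalRandom.current().nextLong(-jitterMs, jitterMs + 1);
    return Math.max(0, intervalMs + offset);
  }

  public static void main(String[] args) {
    for (int i = 0; i < 5; i++) {
      System.out.println(nextRefreshDelay(600_000L, 60_000L)); // 10 min +/- 1 min
    }
  }
}
{code}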



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12975:
--
Affects Version/s: (was: 2.9.0)
   2.8.0

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0
>
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12975:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0
>
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335017#comment-15335017
 ] 

Colin Patrick McCabe commented on HADOOP-12975:
---

I was just adding jitter to the commit date.

+1.  Thanks, [~eclark].

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13255:
---
Attachment: HADOOP-13255.branch-2.patch

Thanks [~xyao].
I tried with the directory-based MiniKdc; even if I set 
{{MIN_TICKET_LIFETIME}}, it ends up with this error when the max ticket 
lifetime is less than 6 minutes, which I think is what Zhe hit in HADOOP-12559.
{noformat}
java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Requested start 
time is later than end time (11) - Requested start time is later than end time)

at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:554)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:659)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$7.call(LoadBalancingKMSClientProvider.java:235)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$7.call(LoadBalancingKMSClientProvider.java:232)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.getKeys(LoadBalancingKMSClientProvider.java:232)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$17$1.run(TestKMS.java:2097)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$17$1.run(TestKMS.java:2091)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1744)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$17.call(TestKMS.java:2091)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$17.call(TestKMS.java:2081)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:141)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:123)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.testTGTRenewal(TestKMS.java:2081)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Requested start 
time is later than end time (11) - Requested start time is later than end time)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:333)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:203)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:149)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:545)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:540)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1744)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:540)
... 26 more
Caused by: GSSException: No valid credentials provided (Mechanism level: 
Requested start time is later than end time (11) - Requested start time is 
later than end time)
at 
sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:710)
at 

[jira] [Updated] (HADOOP-13284) FileSystemStorageStatistics must not attempt to read non-existent rack-aware read stats in branch-2.8

2016-06-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13284:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> FileSystemStorageStatistics must not attempt to read non-existent rack-aware 
> read stats in branch-2.8
> -
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include the rack-aware read 
> stats brought in by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long iterator will throw an 
> NPE while traversing. See the detailed exception stack below (it happens when 
> Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13284) FileSystemStorageStatistics must not attempt to read non-existent rack-aware read stats in branch-2.8

2016-06-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13284:
--
Summary: FileSystemStorageStatistics must not attempt to read non-existent 
rack-aware read stats in branch-2.8  (was: Remove the rack-aware read stats in 
FileSystemStorageStatistics from branch-2.8)

> FileSystemStorageStatistics must not attempt to read non-existent rack-aware 
> read stats in branch-2.8
> -
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include the rack-aware read 
> stats brought in by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long iterator will throw an 
> NPE while traversing. See the detailed exception stack below (it happens when 
> Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334989#comment-15334989
 ] 

Colin Patrick McCabe commented on HADOOP-13284:
---

Thanks for spotting this, [~liuml07].  Good find.

+1, will commit to 2.8 shortly.

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include the rack-aware read 
> stats brought in by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long iterator will throw an 
> NPE while traversing. See the detailed exception stack below (it happens when 
> Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334982#comment-15334982
 ] 

Colin Patrick McCabe commented on HADOOP-13280:
---

Thanks for the patch, [~liuml07].  You are right that it should be readOps + 
largeReadOps.  It's great to have a test as well.

{code}
  return (long) (data.getReadOps() + data.getLargeReadOps());
{code}
Do we need the typecast here?  Seems like it shouldn't be required since the 
int should be promoted to a long automatically.  +1 once that's addressed.
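For reference, a compilable illustration of the two cases (plain Java, 
independent of the patch): whether the cast is needed depends on whether the 
method returns primitive {{long}} or boxed {{java.lang.Long}}.

{code}
public class PromotionDemo {
  static long primitive(int a, int b) {
    return a + b;          // fine: the int result widens to long implicitly
  }

  static Long boxed(int a, int b) {
    // return a + b;       // does not compile: no widen-then-box in one step
    return (long) (a + b); // widen explicitly, then auto-box long -> Long
  }

  public static void main(String[] args) {
    System.out.println(primitive(1, 2) + " " + boxed(1, 2));
  }
}
{code}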

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As to {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334877#comment-15334877
 ] 

Xiaoyu Yao commented on HADOOP-13255:
-

Thanks [~xiaochen] for the contribution and [~zhz] for the review. I've 
committed the patch to trunk.

The unit test needs some additional work for branch-2 and branch-2.8/2.7, as 
Kerby is not available in branch-2. If it is not feasible without Kerby, I'm OK 
with a separate patch for branch-2 without the unit test. Let me know your 
thoughts, [~xiaochen] and [~zhz].
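For context on the fix itself, the gist is to re-check the Kerberos TGT before 
each delegation token operation; a hedged sketch using the public UGI API, not 
the exact patch:

{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Hedged sketch: before issuing a delegation-token request, make sure the
// caller's TGT is still valid and re-login from the keytab if it is not.
public class TgtCheckSketch {
  static void ensureFreshTgt() throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    if (ugi.hasKerberosCredentials()) {
      // No-op while the TGT is fresh; re-logs in when it nears expiry.
      ugi.checkTGTAndReloginFromKeytab();
    }
  }
}
{code}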

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.test.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334865#comment-15334865
 ] 

Hadoop QA commented on HADOOP-13285:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 25 unchanged - 3 fixed = 25 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811227/HADOOP-13285.00.patch 
|
| JIRA Issue | HADOOP-13285 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b16481d3b8de 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 127d2c7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9804/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9804/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
> -
>
> Key: HADOOP-13285
> URL: https://issues.apache.org/jira/browse/HADOOP-13285
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13285.00.patch
>
>
> HADOOP-13197 added non-decayed call metrics in 

[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334858#comment-15334858
 ] 

Hadoop QA commented on HADOOP-13242:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
22s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811235/HADOOP-13242-005.patch
 |
| JIRA Issue | HADOOP-13242 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux d985b813fe79 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b1674ca |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9805/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9805/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9805/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> 

[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334849#comment-15334849
 ] 

Xiao Chen commented on HADOOP-13255:


Thanks a lot [~xyao] and [~zhz]!

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.test.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-16 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334843#comment-15334843
 ] 

Elliott Clark commented on HADOOP-12975:


Ping?

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-06-16 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334844#comment-15334844
 ] 

Elliott Clark commented on HADOOP-12974:


Ping?

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch, 
> HADOOP-12974v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334841#comment-15334841
 ] 

Hudson commented on HADOOP-13255:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9972 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9972/])
HADOOP-13255. KMSClientProvider should check and renew tgt when doing (xyao: 
rev b1674caa409ca2c616207acb72aeb2767d28b10c)
* 
hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.test.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13242:
--
Attachment: HADOOP-13242-005.patch

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch, 
> HADOOP-13242-005.patch, HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary

2016-06-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13285:

Status: Patch Available  (was: Open)

> DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
> -
>
> Key: HADOOP-13285
> URL: https://issues.apache.org/jira/browse/HADOOP-13285
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13285.00.patch
>
>
> HADOOP-13197 added non-decayed call metrics in metrics2 source for 
> DecayedRpcScheduler. However, CallVolumeSummary in the MXBean was unexpectedly 
> affected and came to include both decayed and non-decayed call volumes. The 
> root cause is that the Jackson ObjectMapper simply serializes the whole content 
> of the callCounts map, which contains both non-decayed and decayed counters 
> after HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in the 
> MXBean to include only the decayed call volume, for backward compatibility, and 
> to add a unit test for the DecayRpcScheduler MXBean to catch this in the future. 
> CallVolumeSummary JMX example before HADOOP-13197
> {code}
> "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}"
> {code}
>  CallVolumeSummary JMX example after HADOOP-13197
> {code}
> "CallVolumeSummary" : "{\"hrt_qa\":[1,2]}"
> {code}
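A hedged sketch of the fix direction follows; the real callCounts layout may 
differ and the Jackson import path is an assumption, but the point is to 
serialize only the decayed counter per user:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hedged sketch, not the patch: after HADOOP-13197 each user maps to a
// (raw, decayed) counter pair, so serializing callCounts directly yields
// {"user":[raw,decayed]}. Projecting out the decayed counter first
// restores the pre-HADOOP-13197 {"user":decayed} shape.
public class SummarySketch {
  public static void main(String[] args) throws Exception {
    Map<String, AtomicLong[]> callCounts = new HashMap<>();
    callCounts.put("hbase",
        new AtomicLong[] {new AtomicLong(3), new AtomicLong(1)});

    Map<String, Long> decayedOnly = new HashMap<>();
    for (Map.Entry<String, AtomicLong[]> e : callCounts.entrySet()) {
      decayedOnly.put(e.getKey(), e.getValue()[1].get()); // decayed counter
    }
    System.out.println(new ObjectMapper().writeValueAsString(decayedOnly));
    // prints {"hbase":1}
  }
}
{code}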



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary

2016-06-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13285:

Attachment: HADOOP-13285.00.patch

> DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
> -
>
> Key: HADOOP-13285
> URL: https://issues.apache.org/jira/browse/HADOOP-13285
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13285.00.patch
>
>
> HADOOP-13197 added non-decayed call metrics in metrics2 source for 
> DecayedRpcScheduler. However, CallVolumeSummary in the MXBean was unexpectedly 
> affected and came to include both decayed and non-decayed call volumes. The 
> root cause is that the Jackson ObjectMapper simply serializes the whole content 
> of the callCounts map, which contains both non-decayed and decayed counters 
> after HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in the 
> MXBean to include only the decayed call volume, for backward compatibility, and 
> to add a unit test for the DecayRpcScheduler MXBean to catch this in the future. 
> CallVolumeSummary JMX example before HADOOP-13197
> {code}
> "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}"
> {code}
>  CallVolumeSummary JMX example after HADOOP-13197
> {code}
> "CallVolumeSummary" : "{\"hrt_qa\":[1,2]}"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary

2016-06-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HDFS-10539 to HADOOP-13285:


Component/s: (was: ipc)
 ipc
Key: HADOOP-13285  (was: HDFS-10539)
Project: Hadoop Common  (was: Hadoop HDFS)

> DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
> -
>
> Key: HADOOP-13285
> URL: https://issues.apache.org/jira/browse/HADOOP-13285
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>
> HADOOP-13197 added non-decayed call metrics in metrics2 source for 
> DecayedRpcScheduler. However, CallVolumeSummary in the MXBean was unexpectedly 
> affected and came to include both decayed and non-decayed call volumes. The 
> root cause is that the Jackson ObjectMapper simply serializes the whole content 
> of the callCounts map, which contains both non-decayed and decayed counters 
> after HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in the 
> MXBean to include only the decayed call volume, for backward compatibility, and 
> to add a unit test for the DecayRpcScheduler MXBean to catch this in the future. 
> CallVolumeSummary JMX example before HADOOP-13197
> {code}
> "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}"
> {code}
>  CallVolumeSummary JMX example after HADOOP-13197
> {code}
> "CallVolumeSummary" : "{\"hrt_qa\":[1,2]}"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334764#comment-15334764
 ] 

Hadoop QA commented on HADOOP-13284:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
53s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b35c7cd |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811205/HADOOP-13284-branch-2.8.000.patch
 |
| JIRA Issue | HADOOP-13284 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0018bcce695b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / 605443c |
| 

[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-16 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334734#comment-15334734
 ] 

Stephen O'Donnell commented on HADOOP-13263:


[~arpitagarwal] I have attached a version 2 of the patch, which has:

1. Changes based on the review - please check them and make sure they are what 
you had in mind.
2. A few extra tests.
3. Counters for the 4 metrics - queued, running, success, exception. If you had 
something different in mind for the counters, let me know and I can change them. 
Once we know they are good, I can add a couple of tests to cover that part too.


> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.
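
For illustration, a minimal sketch of the Guava facility this patch builds on, {{CacheLoader.asyncReloading}}; the {{fetchGroupList}} helper, the refresh interval, and the thread count below are assumptions for the sketch, not the patch itself:

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundGroupCacheSketch {
  // analogous to hadoop.security.groups.cache.background.reload.threads = 1
  private static final ExecutorService POOL = Executors.newFixedThreadPool(1);

  static final LoadingCache<String, List<String>> GROUPS =
      CacheBuilder.newBuilder()
          // after expiry, the first reader schedules a background reload
          // and still returns the old value immediately
          .refreshAfterWrite(300, TimeUnit.SECONDS)
          .build(CacheLoader.asyncReloading(
              new CacheLoader<String, List<String>>() {
                @Override
                public List<String> load(String user) throws Exception {
                  return fetchGroupList(user); // the slow group lookup
                }
              }, POOL));

  // hypothetical stand-in for the real (slow) group mapping service
  private static List<String> fetchGroupList(String user) {
    return Arrays.asList(user, "users");
  }
}
{code}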



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-16 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-13263:
---
Attachment: HADOOP-13263.002.patch

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334707#comment-15334707
 ] 

Chris Nauroth commented on HADOOP-13242:


Thank you to Checkstyle for making a significant catch here:

{code}
  private String OAUTH_CLIENT_ID_KEY = "dfs.webhdfs.oauth2.client.id";
  private String OAUTH_REFRESH_URL_KEY = "dfs.webhdfs.oauth2.refresh.url";
{code}

Checkstyle is reporting that the variable names don't fit the coding standard, 
but really the problem is that these should be declared as constants.  For each 
of these, please change {{private String}} to {{private static final String}}.
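
That is, the corrected declarations would read:

{code}
  private static final String OAUTH_CLIENT_ID_KEY = "dfs.webhdfs.oauth2.client.id";
  private static final String OAUTH_REFRESH_URL_KEY = "dfs.webhdfs.oauth2.refresh.url";
{code}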

Let's also remove the import of {{org.apache.hadoop.hdfs.web.oauth2.Utils}}, 
because it resides in the same package.

I expect that will be the last round of nitpicks, and one more revision to 
address the above should finish it.

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch, 
> HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 
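
For context, a rough sketch of a client-credentials token request with the added "resource" parameter; the endpoint form, placeholders, and the Data Lake resource URI below are assumptions for illustration, not values from the patch:

{noformat}
POST https://login.microsoftonline.com/<tenant-id>/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<client-id>
    &client_secret=<client-key>&resource=https://datalake.azure.net/
{noformat}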



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334693#comment-15334693
 ] 

Mingliang Liu commented on HADOOP-13284:


The v0 patch simply removes the bytesReadLocalHost, 
bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
bytesReadDistanceOfFiveOrLarger stats from 
{{FileSystemStorageStatistics#KEYS}}. We have a unit test coming in 
[HADOOP-13280] for testing FileSystemStorageStatistics.
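
Roughly, that leaves the key list looking like the sketch below; the remaining key names are an assumption on my part about what branch-2.8 already carries:

{code}
// sketch of FileSystemStorageStatistics#KEYS on branch-2.8 after the patch;
// the rack-aware entries are dropped because branch-2.8 lacks HDFS-9579
private static final String[] KEYS = {
    "bytesRead", "bytesWritten", "readOps", "largeReadOps", "writeOps"
};
{code}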

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
> stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator 
> will throw an NPE when traversing. See the detailed exception stack below (it 
> happens when Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334684#comment-15334684
 ] 

Hadoop QA commented on HADOOP-13242:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-tools/hadoop-azure-datalake: The patch generated 
3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811204/HADOOP-13242-004.patch
 |
| JIRA Issue | HADOOP-13242 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 3dd418d3694b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 127d2c7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9803/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure-datalake.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9803/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9803/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9803/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   

[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13242:
---
Status: Patch Available  (was: Open)

[~ASikaria], patch 004 looks good to me.  I built the site locally and reviewed 
the content.  I confirmed that the contract tests work with those instructions. 
 I am submitting the patch for a pre-commit build validation.

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch, 
> HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13284:
---
Fix Version/s: (was: 2.8.0)

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
> stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator 
> will throw an NPE when traversing. See the detailed exception stack below (it 
> happens when Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13284:
---
Attachment: HADOOP-13284-branch-2.8.000.patch

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
> stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator 
> will throw an NPE when traversing. See the detailed exception stack below (it 
> happens when Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13284:
---
Status: Patch Available  (was: Open)

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13284-branch-2.8.000.patch
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
> stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator 
> will throw an NPE when traversing. See the detailed exception stack below (it 
> happens when Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13242:
--
Attachment: HADOOP-13242-004.patch

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch, 
> HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334632#comment-15334632
 ] 

Mingliang Liu commented on HADOOP-13284:


This was an innocent omission when committing the [HADOOP-13065] patch, as the 
original patch was for the trunk branch. I should have prepared a branch-2.8 patch.

Please note that this should only go into {{branch-2.8}}. {{branch-2}} has 
[HDFS-9579], so it should be fine.

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
> stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator 
> will throw an NPE when traversing. See the detailed exception stack below (it 
> happens when Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2.8

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13284:
---
Summary: Remove the rack-aware read stats in FileSystemStorageStatistics 
from branch-2.8  (was: Remove the rack-aware read stats in 
FileSystemStorageStatistics from branch-2)

> Remove the rack-aware read stats in FileSystemStorageStatistics from 
> branch-2.8
> ---
>
> Key: HADOOP-13284
> URL: https://issues.apache.org/jira/browse/HADOOP-13284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>
> As [HDFS-9579] was not committed to {{branch-2.8}}, 
> {{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
> stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
> bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
> bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator 
> will throw an NPE when traversing. See the detailed exception stack below (it 
> happens when Tez uses the new FileSystemStorageStatistics).
> {code}
> 2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
> Cleared TezProcessorContextImpl related information
> 2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
> RunnerCallable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
> to AM, request={  containerId=container_1466028486194_0005_01_02, 
> requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
> taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
> {code}
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13284) Remove the rack-aware read stats in FileSystemStorageStatistics from branch-2

2016-06-16 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13284:
--

 Summary: Remove the rack-aware read stats in 
FileSystemStorageStatistics from branch-2
 Key: HADOOP-13284
 URL: https://issues.apache.org/jira/browse/HADOOP-13284
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


As [HDFS-9579] was not committed to {{branch-2.8}}, 
{{FileSystemStorageStatistics#KEYS}} should not include those rack aware read 
stats brought by [HDFS-9579], including {{bytesReadLocalHost, 
bytesReadDistanceOfOneOrTwo, bytesReadDistanceOfThreeOrFour, 
bytesReadDistanceOfFiveOrLarger}}. Otherwise, the long-statistic iterator will 
throw an NPE when traversing. See the detailed exception stack below (it happens 
when Tez uses the new FileSystemStorageStatistics).

{code}
2016-06-15 15:56:59,242 [DEBUG] [TezChild] |impl.TezProcessorContextImpl|: 
Cleared TezProcessorContextImpl related information
2016-06-15 15:56:59,243 [WARN] [main] |task.TezTaskRunner2|: Exception from 
RunnerCallable
java.lang.NullPointerException
at 
org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:74)
at 
org.apache.hadoop.fs.FileSystemStorageStatistics$LongStatisticIterator.next(FileSystemStorageStatistics.java:51)
at 
org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:51)
at 
org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
at 
org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-06-15 15:56:59,245 [DEBUG] [main] |task.TaskReporter|: Sending heartbeat 
to AM, request={  containerId=container_1466028486194_0005_01_02, 
requestId=10, startIndex=0, preRoutedStartIndex=1, maxEventsToGet=500, 
taskAttemptId=attempt_1466028486194_0005_1_00_00_0, eventCount=4 }
{code}

Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334625#comment-15334625
 ] 

Zhe Zhang commented on HADOOP-13255:


Thanks Xiao for the revs and Xiaoyu for the review. The v05 patch LGTM as well.

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.test.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334611#comment-15334611
 ] 

Chris Nauroth commented on HADOOP-13242:


[~ASikaria], there are some invalid characters in patch 003.  For example:

bq. Under �Browse�, look for Active Directory and click on it.

Maybe your editor was configured to save in an unexpected encoding?  It's 
simplest if the patch can stick to plain ASCII if there isn't a compelling need 
for multi-byte characters.

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HDFS-10462-001.patch, 
> HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-06-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13283:
---
Description: 
Applications may reuse the file system object across jobs and its storage 
statistics should be reset. In particular, the {{FileSystem.Statistics}} supports 
reset and [HADOOP-13032] needs to keep that use case valid.

This jira is for supporting reset operations for storage statistics.

Thanks [~hitesh] for reporting this.

  was:
Applications may reuse the file system object across jobs and its storage 
statistics should be reset. In particular, the {{FileSystem.Statistics}} supports 
reset and [HADOOP-13032] needs to keep that use case valid.

This jira is for supporting reset operations for storage statistics.


> Support reset operation for new global storage statistics and per FS storage 
> stats
> --
>
> Key: HADOOP-13283
> URL: https://issues.apache.org/jira/browse/HADOOP-13283
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>
> Applications may reuse the file system object across jobs and its storage 
> statistics should be reset. In particular, the {{FileSystem.Statistics}} supports 
> reset and [HADOOP-13032] needs to keep that use case valid.
> This jira is for supporting reset operations for storage statistics.
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13242:
--
Attachment: HADOOP-13242-003.patch

Updated with changes to index.md, as requested by Chris Nauroth. Also removed 
trailing spaces in other files.

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13242-003.patch, HDFS-10462-001.patch, 
> HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-06-16 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13283:
--

 Summary: Support reset operation for new global storage statistics 
and per FS storage stats
 Key: HADOOP-13283
 URL: https://issues.apache.org/jira/browse/HADOOP-13283
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Mingliang Liu
Assignee: Mingliang Liu


Applications may reuse the file system object across jobs and its storage 
statistics should be reset. In particular, the {{FileSystem.Statistics}} supports 
reset and [HADOOP-13032] needs to keep that use case valid.

This jira is for supporting reset operations for storage statistics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13242:
--
Attachment: (was: index.md)

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-16 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13242:
--
Attachment: index.md

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch, index.md
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2016-06-16 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13282:
---

 Summary: S3 blob etags to be made visible in 
status/getFileChecksum() calls
 Key: HADOOP-13282
 URL: https://issues.apache.org/jira/browse/HADOOP-13282
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran
Priority: Minor


If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
possible to probe for a blob being in sync with a local file. Distcp could use 
this to decide whether to skip a file or not.

Now, there's a problem there: distcp needs source and dest filesystems to 
implement the same algorithm. It'd only work out of the box if you were copying 
between S3 instances. There are also quirks with encryption and multipart: [s3 
docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
At the very least, it's something which could be used when indexing the FS, to 
check for changes later.
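
As a sketch of one possible shape (an illustration, not a committed design), the etag could surface through a small {{FileChecksum}} subclass; the class and algorithm names here are hypothetical:

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FileChecksum;

// Hypothetical sketch: expose an S3 object's etag via the FileChecksum API.
public class EtagChecksum extends FileChecksum {
  private byte[] etag = new byte[0];

  public EtagChecksum() { }

  public EtagChecksum(String etag) {
    this.etag = etag.getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public String getAlgorithmName() {
    return "S3-ETAG"; // illustrative name, not a standard
  }

  @Override
  public int getLength() {
    return etag.length;
  }

  @Override
  public byte[] getBytes() {
    return etag;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(etag.length);
    out.write(etag);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    etag = new byte[in.readInt()];
    in.readFully(etag);
  }
}
{code}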



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem

2016-06-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334542#comment-15334542
 ] 

Steve Loughran commented on HADOOP-12804:
-

Larry, can you submit this against branch-2, with a -branch-2-xyz.patch marker 
so that Yetus builds it there? And remove that log statement. Thanks.

> Read Proxy Password from Credential Providers in S3 FileSystem
> --
>
> Key: HADOOP-12804
> URL: https://issues.apache.org/jira/browse/HADOOP-12804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Larry McCay
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12804-001.patch
>
>
> HADOOP-12548 added credential provider support for the AWS credentials to 
> S3FileSystem. This JIRA is for considering the use of the credential 
> providers for the proxy password as well.
> Instead of adding the proxy password to the config file directly and in clear 
> text, we could provision it in addition to the AWS credentials into a 
> credential provider and keep it out of clear text.
> In terms of usage, it could be added to the same credential store as the AWS 
> credentials or potentially to a more universally available path - since it is 
> the same for everyone. This would however require multiple providers to be 
> configured in the provider.path property and more open file permissions on 
> the store itself.
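
For illustration, a minimal sketch of the credential-provider-aware lookup; the property name and wrapper class here are assumptions for the sketch:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class ProxyPasswordSketch {
  static String proxyPassword(Configuration conf) throws IOException {
    // getPassword() consults hadoop.security.credential.provider.path first,
    // then falls back to the clear-text config value if no provider has it
    char[] pass = conf.getPassword("fs.s3a.proxy.password");
    return pass == null ? null : new String(pass);
  }
}
{code}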



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Fix Version/s: (was: 3.0.0-alpha1)
   (was: 2.9.0)
   2.8.0

pulled the fix into Hadoop 2.8 as well; changed the "fixed for" version marker 
appropriately.

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code>
> <Message>The request signature we calculated does not match the signature you 
> provided. Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13278) S3AFileSystem mkdirs does not need to validate parent path components

2016-06-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334510#comment-15334510
 ] 

Steve Loughran commented on HADOOP-13278:
-

...or:
- an FS could have some parent limit which it wouldn't try to recurse up beyond.
- the special case of permission denied is somehow recognised as an IAM issue 
and skipped. Problem: how to do that?

> S3AFileSystem mkdirs does not need to validate parent path components
> -
>
> Key: HADOOP-13278
> URL: https://issues.apache.org/jira/browse/HADOOP-13278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools
>Reporter: Adrian Petrescu
>Priority: Minor
>
> According to S3 semantics, there is no conflict if a bucket contains a key 
> named {{a/b}} and also a directory named {{a/b/c}}. "Directories" in S3 are, 
> after all, nothing but prefixes.
> However, the {{mkdirs}} call in {{S3AFileSystem}} does go out of its way to 
> traverse every parent path component for the directory it's trying to create, 
> making sure there's no file with that name. This is suboptimal for three main 
> reasons:
>  * Wasted API calls, since the client is getting metadata for each path 
> component 
>  * This can cause *major* problems with buckets whose permissions are being 
> managed by IAM, where access may not be granted to the root bucket, but only 
> to some prefix. When you call {{mkdirs}}, even on a prefix that you have 
> access to, the traversal up the path will cause you to eventually hit the 
> root bucket, which will fail with a 403 - even though the directory creation 
> call would have succeeded.
>  * Some people might actually have a file that matches some other file's 
> prefix... I can't see why they would want to do that, but it's not against 
> S3's rules.
> I've opened a pull request with a simple patch that just removes this portion 
> of the check. I have tested it with my team's instance of Spark + Luigi, and 
> can confirm it works, and resolves the aforementioned permissions issue for a 
> bucket on which we only had prefix access.
> This is my first ticket/pull request against Hadoop, so let me know if I'm 
> not following some convention properly :)
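
For context, the ancestry traversal described above looks roughly like the 
following; a simplified sketch, not the exact {{S3AFileSystem}} code (the 
method name is illustrative, and it is assumed to live inside a 
{{FileSystem}} subclass that provides {{getFileStatus}}):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Walks every ancestor of the target path; each getFileStatus() call costs
// S3 metadata requests, and a 403 on any ancestor (e.g. an IAM policy that
// only grants access to a prefix) fails the whole mkdirs.
private void checkNoAncestorIsAFile(Path path) throws IOException {
  Path parent = path.getParent();
  while (parent != null) {
    FileStatus status = getFileStatus(parent); // may throw on 403
    if (status.isFile()) {
      throw new FileAlreadyExistsException(
          "Can't make directory " + path + ": " + parent + " is a file");
    }
    parent = parent.getParent();
  }
}
{code}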



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-16 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334497#comment-15334497
 ] 

Eric Badger commented on HADOOP-12893:
--

[~ajisakaa], [~aw], [~xiaochen], branch-2.7 is failing to build for me after 
this patch was committed. 

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hadoop-project: Resources archive cannot be found. Failure to find 
org.apache.hadoop:hadoop-build-tools:jar:2.7.4-SNAPSHOT in 
http://weakdinner:8081/nexus/content/groups/public was cached in the local 
repository, resolution will not be reattempted until the update interval of 
nexus has elapsed or updates are forced
{noformat}

If I move the hadoop-build-tools module ahead of hadoop-project in the 
hadoop/pom.xml file, then it builds fine. So it looks as if there's a missing 
dependency in hadoop-project, and hadoop-build-tools isn't available when it's 
required. The build also succeeds if you run mvn install in the 
hadoop-build-tools directory before building from the top level of hadoop. 

{noformat}
107   <modules>
108     <module>hadoop-project</module>
109     <module>hadoop-project-dist</module>
110     <module>hadoop-assemblies</module>
111     <module>hadoop-maven-plugins</module>
112     <module>hadoop-common-project</module>
113     <module>hadoop-hdfs-project</module>
114     <module>hadoop-yarn-project</module>
115     <module>hadoop-mapreduce-project</module>
116     <module>hadoop-tools</module>
117     <module>hadoop-dist</module>
118     <module>hadoop-client</module>
119     <module>hadoop-minicluster</module>
120     <module>hadoop-build-tools</module>
121   </modules>
{noformat}
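
A sketch of the workaround described above, assuming only the module ordering 
changes in hadoop/pom.xml (the comment is illustrative):

{noformat}
  <modules>
    <module>hadoop-build-tools</module>  <!-- moved ahead so its jar resolves -->
    <module>hadoop-project</module>
    <module>hadoop-project-dist</module>
    <module>hadoop-assemblies</module>
    ...
  </modules>
{noformat}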

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334438#comment-15334438
 ] 

Yufei Gu commented on HADOOP-13254:
---

The test failure is unrelated.

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch, HADOOP-13254.005.patch, 
> HADOOP-13254.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334434#comment-15334434
 ] 

Xiaoyu Yao commented on HADOOP-13255:
-

Thanks [~xiaochen] for the clarification. The change makes sense to me after 
rechecking the code with the attached stack. +1 for the v05 patch. 
I will commit it by EOD today, to allow time in case [~zhz] and others have 
additional comments. 

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.test.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334423#comment-15334423
 ] 

Mingliang Liu commented on HADOOP-13280:


[~cmccabe] and [~jnp], would you kindly review the patch (and proposal)? Thanks.

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently a {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As to {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}; this 
> JIRA will also address that.
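
A hedged sketch of the proposed behavior (the helper's shape is illustrative; 
only {{getReadOps()}} and {{getLargeReadOps()}} are real 
{{FileSystem.Statistics}} accessors):

{code}
// Hypothetical fetch helper: delegate "readOps" to getReadOps(), which
// already returns readOps + largeReadOps, so the sum is what gets reported.
private Long fetch(FileSystem.Statistics stats, String key) {
  if ("readOps".equals(key)) {
    return (long) stats.getReadOps();
  } else if ("largeReadOps".equals(key)) {
    return (long) stats.getLargeReadOps();
  }
  return null; // unknown key
}
{code}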



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334408#comment-15334408
 ] 

Hudson commented on HADOOP-3733:


SUCCESS: Integrated in Hadoop-trunk-Commit #9971 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9971/])
HADOOP-3733. "s3x:" URLs break when Secret Key contains a slash, even if 
(raviprak: rev 4aefe119a0203c03cdc893dcb3330fd37f26f0ee)
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ACredentialsInURL.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/S3xLoginHelper.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/TestS3FileSystem.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AConfiguration.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestS3xLoginHelper.java


> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13241) document s3a better

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334409#comment-15334409
 ] 

Hudson commented on HADOOP-13241:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9971 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9971/])
HADOOP-13241. document s3a better. Contributed by Steve Loughran. (cnauroth: 
rev 127d2c7281917f23bce17afa6098a2d678a16441)
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md


> document s3a better
> ---
>
> Key: HADOOP-13241
> URL: https://issues.apache.org/jira/browse/HADOOP-13241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13241-branch-2-001.patch, 
> HADOOP-13241-branch-2-002.patch, HADOOP-13241-branch-2-003.patch, 
> HADOOP-13241-branch-2-004.patch
>
>
> s3a can be documented better: things like classpath, troubleshooting, etc.
> Sit down and do it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334398#comment-15334398
 ] 

Hadoop QA commented on HADOOP-13254:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  4s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811159/HADOOP-13254.006.patch
 |
| JIRA Issue | HADOOP-13254 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ddb35e0304ef 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e983eaf |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9801/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9801/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9801/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>

[jira] [Updated] (HADOOP-13241) document s3a better

2016-06-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13241:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch 004.  I have committed this to trunk, branch-2 and branch-2.8.  
Steve, thank you for adding these details to the documentation.

> document s3a better
> ---
>
> Key: HADOOP-13241
> URL: https://issues.apache.org/jira/browse/HADOOP-13241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13241-branch-2-001.patch, 
> HADOOP-13241-branch-2-002.patch, HADOOP-13241-branch-2-003.patch, 
> HADOOP-13241-branch-2-004.patch
>
>
> s3a can be documented better: things like classpath, troubleshooting, etc.
> Sit down and do it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334375#comment-15334375
 ] 

Chris Nauroth commented on HADOOP-3733:
---

[~raviprak], thank you very much for the thorough code review and testing.

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-3733:
-
  Resolution: Fixed
   Fix Version/s: 3.0.0-alpha1
  2.9.0
Target Version/s:   (was: 2.8.0)
  Status: Resolved  (was: Patch Available)

Thanks a lot everyone for your contributions on this long-standing issue. I'm 
glad we could close it out thanks to Steve!

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334359#comment-15334359
 ] 

Chris Nauroth commented on HADOOP-13203:


Steve, thanks for posting the details of the instrumentation.  I think the key 
point there is \{streamOpened=1584\} vs. \{streamOpened=1\}.  If the Spark test 
only triggered 1 streamOpened (without this patch), then it must have been a 
full forward-scan usage pattern.  This matches my earlier observation 
about {{TestS3AInputStreamPerformance#testReadAheadDefault}}, so for future 
testing, that's a test case we can focus on, decoupled from Spark.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, stream_stats.tar.gz
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen().  As part of lazySeek(), sometimes the 
> stream has to be closed and reopened, but often the stream is closed with 
> abort(), rendering the internal HTTP connection unusable. This incurs 
> significant connection-establishment cost in some jobs.  It would be good to 
> set the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the AWS tests pass on my machine.
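
A hedged sketch of the idea, using the AWS SDK v1 ranged-GET API that S3A 
builds on ({{targetPos}} and {{requestedStreamLen}} are illustrative names, 
not the exact fields):

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

// Request only the bytes the caller is expected to read. A stream bounded
// this way can be drained and close()d when done, instead of abort()ed,
// which keeps the underlying HTTP connection reusable.
S3Object openRange(AmazonS3 client, String bucket, String key,
                   long targetPos, long requestedStreamLen) {
  GetObjectRequest request = new GetObjectRequest(bucket, key)
      .withRange(targetPos, targetPos + requestedStreamLen - 1);
  return client.getObject(request);
}
{code}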



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334355#comment-15334355
 ] 

Ravi Prakash commented on HADOOP-3733:
--

Thanks a lot Steve! +1. Will commit to trunk and branch-2 shortly!

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334311#comment-15334311
 ] 

Steve Loughran commented on HADOOP-13203:
-

-1

I accept your contention that it works well for your benchmark. However, it 
does so at the expense of being pathologically bad for anything trying to read 
a file in different ways. I can demonstrate this with the log from one of my 
SPARK-7481 runs. Essentially, reading a 20MB .csv.gz file has gone from under 
20s to about 6 minutes: 20x slower.

{code}

2016-06-16 18:34:21,350 INFO  scheduler.DAGScheduler 
(Logging.scala:logInfo(58)) - Job 0 finished: count at S3LineCount.scala:99, 
took 350.460510 s
2016-06-16 18:34:21,355 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- Duration of  count s3a://landsat-pds/scene_list.gz = 350,666,373,013 ns
2016-06-16 18:34:21,355 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- line count = 514524
2016-06-16 18:34:21,356 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- File System = S3AFileSystem{uri=s3a://landsat-pds, 
workingDir=s3a://landsat-pds/user/stevel, partSize=5242880, 
enableMultiObjectsDelete=true, maxKeys=5000, readAhead=65536, 
blockSize=1048576, multiPartThreshold=5242880, statistics {22626526 bytes read, 
0 bytes written, 3 read ops, 0 large read ops, 0 write ops}, metrics 
{{Context=S3AFileSystem} 
{FileSystemId=03e96d8b-c5d4-4b3c-8b9d-04931588912b-landsat-pds} 
{fsURI=s3a://landsat-pds/scene_list.gz} {files_created=0} {files_copied=0} 
{files_copied_bytes=0} {files_deleted=0} {directories_created=0} 
{directories_deleted=0} {ignored_errors=1} {invocations_copyfromlocalfile=0} 
{invocations_exists=0} {invocations_getfilestatus=3} {invocations_globstatus=1} 
{invocations_is_directory=0} {invocations_is_file=0} {invocations_listfiles=0} 
{invocations_listlocatedstatus=0} {invocations_liststatus=0} 
{invocations_mdkirs=0} {invocations_rename=0} {object_copy_requests=0} 
{object_delete_requests=0} {object_list_requests=0} 
{object_metadata_requests=3} {object_multipart_aborted=0} {object_put_bytes=0} 
{object_put_requests=0} {streamReadOperations=1584} 
{streamForwardSeekOperations=0} {streamBytesRead=22626526} 
{streamSeekOperations=0} {streamReadExceptions=0} {streamOpened=1584} 
{streamReadOperationsIncomplete=1584} {streamAborted=0} 
{streamReadFullyOperations=0} {streamClosed=1584} {streamBytesSkippedOnSeek=0} 
{streamCloseOperations=1584} {streamBytesBackwardsOnSeek=0} 
{streamBackwardSeekOperations=0} }}

{code}

And without the patch

{code}
2016-06-16 18:37:55,688 INFO  scheduler.DAGScheduler 
(Logging.scala:logInfo(58)) - Job 0 finished: count at S3LineCount.scala:99, 
took 15.853566 s
2016-06-16 18:37:55,693 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- Duration of  count s3a://landsat-pds/scene_list.gz = 16,143,975,760 ns
2016-06-16 18:37:55,693 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- line count = 514524
2016-06-16 18:37:55,694 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- File System = S3AFileSystem{uri=s3a://landsat-pds, 
workingDir=s3a://landsat-pds/user/stevel, partSize=5242880, 
enableMultiObjectsDelete=true, maxKeys=5000, readAhead=65536, 
blockSize=1048576, multiPartThreshold=5242880, statistics {22626526 bytes read, 
0 bytes written, 3 read ops, 0 large read ops, 0 write ops}, metrics 
{{Context=S3AFileSystem} 
{FileSystemId=96650849-6e33-441f-a976-e74443239ad6-landsat-pds} 
{fsURI=s3a://landsat-pds/scene_list.gz} {files_created=0} {files_copied=0} 
{files_copied_bytes=0} {files_deleted=0} {directories_created=0} 
{directories_deleted=0} {ignored_errors=1} {invocations_copyfromlocalfile=0} 
{invocations_exists=0} {invocations_getfilestatus=3} {invocations_globstatus=1} 
{invocations_is_directory=0} {invocations_is_file=0} {invocations_listfiles=0} 
{invocations_listlocatedstatus=0} {invocations_liststatus=0} 
{invocations_mdkirs=0} {invocations_rename=0} {object_copy_requests=0} 
{object_delete_requests=0} {object_list_requests=0} 
{object_metadata_requests=3} {object_multipart_aborted=0} {object_put_bytes=0} 
{object_put_requests=0} {streamReadOperations=2601} 
{streamForwardSeekOperations=0} {streamBytesRead=22626526} 
{streamSeekOperations=0} {streamReadExceptions=0} {streamOpened=1} 
{streamReadOperationsIncomplete=2601} {streamAborted=0} 
{streamReadFullyOperations=0} {streamClosed=1} {streamBytesSkippedOnSeek=0} 
{streamCloseOperations=1} {streamBytesBackwardsOnSeek=0} 
{streamBackwardSeekOperations=0} }}
2016-06-16 18:37:55,694 INFO  examples.S3LineCount (Logging.scala:logInfo(58)) 
- Stopping Spark Context
{code}


The test, I believe, simply reads in the whole file: no seeks, no skipping. I 
see the number of stream-open calls has gone from 1 to 1584. I suspect that is 
what's at play here.

I think this code needs what I suggested, some block mechanism which works with 
open read() calls along with read operations where 

[jira] [Updated] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HADOOP-13254:
--
Attachment: HADOOP-13254.006.patch

Patch 006 is uploaded; it separates a test in TestDiskValidatorFactory to 
avoid a potential error.

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch, HADOOP-13254.005.patch, 
> HADOOP-13254.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13281) S3A to track multipart upload count, size, duration

2016-06-16 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13281:
---

 Summary: S3A to track multipart upload count, size, duration
 Key: HADOOP-13281
 URL: https://issues.apache.org/jira/browse/HADOOP-13281
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


The S3A OutputStream can do multipart uploads. This should be trackable with 
metrics, such as:

* total number of uploads
* current number of transfers in progress
* total time for uploads
* total bytes
* bandwidth (can be inferred from bytes/time)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334152#comment-15334152
 ] 

Hadoop QA commented on HADOOP-13254:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  8s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811142/HADOOP-13254.005.patch
 |
| JIRA Issue | HADOOP-13254 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a290235827a1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e14ee0d |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9800/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9800/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9800/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>

[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334144#comment-15334144
 ] 

Hudson commented on HADOOP-12875:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9969 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9969/])
HADOOP-12875. [Azure Data Lake] Support for contract test and unit test 
(cnauroth: rev c9e71382a58b6ffcb3fccb79d3c146877f1c8313)
* hadoop-tools/hadoop-azure-datalake/src/test/resources/adls.xml
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractOpenLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/common/AdlMockWebServer.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/hdfs/web/TestSplitSizeCalculation.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/AdlStorageConfiguration.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractDeleteLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractConcatLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/hdfs/web/TestAdlRead.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractAppendLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlWebHdfsFileContextCreateMkdirLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractRenameLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/hdfs/web/TestConcurrentDataReadOperations.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractMkdirLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/AdlStorageContract.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlReadLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractRootDirLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/resources/contract-test-options.xml
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlWebHdfsFileContextMainOperationsLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractCreateLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/hdfs/web/TestConfigurationSetting.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestableAdlFileSystem.java
* 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLakeFileSystem.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlDifferentSizeWritesLive.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/oauth2/TestCachedRefreshTokenBasedAccessTokenProvider.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/common/ExpectedResponse.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestADLResponseData.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/common/TestDataForRead.java
* 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractSeekLive.java


> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-06-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12875:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

[~vishwajeet.dusane], thank you for pointing out the Findbugs misreporting.  I 
confirmed manually that this patch clears the Findbugs warning.

+1, and committed to trunk.

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13278) S3AFileSystem mkdirs does not need to validate parent path components

2016-06-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334120#comment-15334120
 ] 

Chris Nauroth commented on HADOOP-13278:


[~apetresc], I admit I hadn't considered interaction with IAM policies before, 
but I definitely see how this could be useful, and it's interesting to think 
about it.  Unfortunately, I don't see a viable way to satisfy the full range of 
possible authorization requirements that users have come to expect from a file 
system.

For the specific case that we started talking about here (walking up the 
ancestry to verify that there are no pre-existing files), it might work if that 
policy was changed slightly, so that the user was granted full access to 
/a/b/c/\*, and also granted read-only access to /\*.  I expect read access 
would be sufficient for the ancestry-checking logic.  Of course, if you also 
want to block read access to /, then this policy wouldn't satisfy the 
requirement.  It would only block write access on /.
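
A hedged sketch of such a policy (the bucket name and action list are 
illustrative, not a tested policy):

{noformat}
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": "s3:*",
     "Resource": "arn:aws:s3:::mybucket/a/b/c/*"},
    {"Effect": "Allow",
     "Action": ["s3:GetObject", "s3:ListBucket"],
     "Resource": ["arn:aws:s3:::mybucket",
                  "arn:aws:s3:::mybucket/*"]}
  ]
}
{noformat}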

Another consideration is handling of what we call a "fake directory", which is 
a pure metadata object used to indicate the presence of an empty directory.  
For example, consider an administrator allocating a bucket, bootstrapping the 
initial /a/b/c directory structure by running mkdir, and then applying the 
policy I described above.  At this point, S3A has persisted /a/b/c to the 
bucket as such a fake directory.  After the first file put, say /a/b/c/d, S3A 
no longer needs that metadata object.  Instead, the directory exists implicitly via the 
existence of the file /a/b/c/d.  At that point, S3A would clean up the fake 
directory by deleting /a/b/c.  That implies the user would need to be granted 
delete access to /a/b/c itself, not just /a/b/c/*.  Now if we further consider 
the user deleting /a/b/c/d after that, then S3A needs to recreate the fake 
directory at /a/b/c, so the user is going to need put access on /a/b/c.

bq. Is this correct? If so, I'm not sure a separate issue is needed; the use 
case would simply be unsupported and I'll have to move my S3A filesystem to a 
bucket that grants Hadoop/Spark root access.

Definitely the typical usage is to dedicate the whole bucket to persistence of 
a single S3A file system, with the understanding of the authorization 
limitations that come with that.  Anyone who has credentials to access the 
bucket effectively has full access to that whole file system.  This is a known 
limitation, and it's common to other object store file systems like WASB too.  
I'm not aware of anyone trying to use IAM policies to restrict access to a 
sub-tree.  Certainly it's not something we actively test within the project 
right now, so in that sense, it's unsupported and you'd be treading new ground.

> S3AFileSystem mkdirs does not need to validate parent path components
> -
>
> Key: HADOOP-13278
> URL: https://issues.apache.org/jira/browse/HADOOP-13278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools
>Reporter: Adrian Petrescu
>Priority: Minor
>
> According to S3 semantics, there is no conflict if a bucket contains a key 
> named {{a/b}} and also a directory named {{a/b/c}}. "Directories" in S3 are, 
> after all, nothing but prefixes.
> However, the {{mkdirs}} call in {{S3AFileSystem}} does go out of its way to 
> traverse every parent path component for the directory it's trying to create, 
> making sure there's no file with that name. This is suboptimal for three main 
> reasons:
>  * Wasted API calls, since the client is getting metadata for each path 
> component 
>  * This can cause *major* problems with buckets whose permissions are being 
> managed by IAM, where access may not be granted to the root bucket, but only 
> to some prefix. When you call {{mkdirs}}, even on a prefix that you have 
> access to, the traversal up the path will cause you to eventually hit the 
> root bucket, which will fail with a 403 - even though the directory creation 
> call would have succeeded.
>  * Some people might actually have a file that matches some other file's 
> prefix... I can't see why they would want to do that, but it's not against 
> S3's rules.
> I've opened a pull request with a simple patch that just removes this portion 
> of the check. I have tested it with my team's instance of Spark + Luigi, and 
> can confirm it works, and resolves the aforementioned permissions issue for a 
> bucket on which we only had prefix access.
> This is my first ticket/pull request against Hadoop, so let me know if I'm 
> not following some convention properly :)



--
This message was sent by Atlassian JIRA

[jira] [Updated] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HADOOP-13254:
--
Attachment: HADOOP-13254.005.patch

Thanks [~templedf] for the detailed review. I uploaded patch 005 addressing 
all the comments.

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch, HADOOP-13254.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Patch Available  (was: Open)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}
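
A quick way to see why even the encoded form is fragile: {{java.net.URI}} 
(which Hadoop's {{Path}} handling is built on) decodes the user-info component 
on access, so the {{%2F}} silently turns back into a raw slash before any 
credential-splitting code runs. A self-contained demo, reusing the dummy keys 
from the report above:

{noformat}
import java.net.URI;
import java.net.URLEncoder;

public class S3UrlDemo {
  public static void main(String[] args) throws Exception {
    String accessKey = "RYWX12N9WCY42XVOL8WH";
    String secretKey = "Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv";

    // Percent-encode the secret so the slash becomes %2F.
    String encoded = URLEncoder.encode(secretKey, "UTF-8");

    URI uri = new URI("s3://" + accessKey + ":" + encoded + "@mybucket/dest");
    // getUserInfo() decodes the component, so the raw "/" is back and any
    // later split or re-parse of the credential string sees the wrong key.
    System.out.println(uri.getUserInfo());
    // getRawUserInfo() preserves the %2F.
    System.out.println(uri.getRawUserInfo());
  }
}
{noformat}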



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Attachment: HADOOP-3733-branch-2-007.patch

Patch 007
Ravi's patch + javadocs.

Tested against S3 Ireland and on the command line. It's notable that the 
secrets end up everywhere, such as in the output of the {{hadoop fs -ls}} 
command. Putting AWS login details in URLs is just wrong.
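
For reference, a sketch of the configuration-based route that keeps the secret 
out of the URL entirely, assuming the long-standing {{fs.s3.awsAccessKeyId}} / 
{{fs.s3.awsSecretAccessKey}} property names for the s3 connector (the s3n 
variants are spelled {{fs.s3n.*}}); the bucket and keys are placeholders:

{noformat}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3CredsViaConf {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Credentials live in configuration, not in the URL, so they never
    // show up in paths printed by `hadoop fs -ls` or in logged URIs.
    conf.set("fs.s3.awsAccessKeyId", "RYWX12N9WCY42XVOL8WH");
    conf.set("fs.s3.awsSecretAccessKey",
        "Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv");

    FileSystem fs = FileSystem.get(new URI("s3://mybucket/"), conf);
    System.out.println(fs.exists(new Path("/dest")));
  }
}
{noformat}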

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Open  (was: Patch Available)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333911#comment-15333911
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-9613 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-9613 |
| GITHUB PR | https://github.com/apache/hadoop/pull/76 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9798/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running an mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


