[jira] [Commented] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333159#comment-15333159
 ] 

Hadoop QA commented on HADOOP-13149:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project-dist in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d1c475d |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810990/HADOOP-13149.branch-2.01.patch
 |
| JIRA Issue | HADOOP-13149 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ea7c9ce85832 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / d1c475d |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9791/testReport/ |
| modules | C: hadoop-project-dist U: hadoop-project-dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9791/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333143#comment-15333143
 ] 

Akira AJISAKA commented on HADOOP-13149:


Thanks [~cnauroth] for updating the jira. I don't have a Windows environment 
right now, so it will take some time to prepare one using an MSDN license.

> Windows distro build fails on dist-copynativelibs.
> --
>
> Key: HADOOP-13149
> URL: https://issues.apache.org/jira/browse/HADOOP-13149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-13149.001.patch, HADOOP-13149.branch-2.01.patch
>
>
> HADOOP-12892 pulled the dist-copynativelibs script into an external file.  
> The call to this script is failing when running a distro build on Windows.






[jira] [Commented] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333139#comment-15333139
 ] 

Chris Nauroth commented on HADOOP-13149:


[~ajisakaa], were you able to validate building the distro on Windows for 
branch-2 and branch-2.8 with this patch?  If so, then +1 pending pre-commit.  I 
have reopened the issue and clicked submit patch to get a pre-commit run on the 
branch-2 patch.







[jira] [Commented] (HADOOP-13245) Fix up some misc create-release issues

2016-06-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333138#comment-15333138
 ] 

Akira AJISAKA commented on HADOOP-13245:


HADOOP-12892 was backported to branch-2 and branch-2.8, so this commit should 
be backported as well. What do you think, [~andrew.wang]?

> Fix up some misc create-release issues
> --
>
> Key: HADOOP-13245
> URL: https://issues.apache.org/jira/browse/HADOOP-13245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13245.00.patch, HADOOP-13245.01.patch, 
> HADOOP-13245.02.patch, HADOOP-13245.03.patch, HADOOP-13245.04.patch
>
>
> 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. This 
> needs to be added to the Dockerfile.
> 2. Add the missing -Pdocs so that the documentation build is complete.






[jira] [Updated] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13149:
---
Status: Patch Available  (was: Reopened)







[jira] [Updated] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13149:
---
Target Version/s: 2.8.0  (was: 3.0.0-alpha1)
   Fix Version/s: (was: 3.0.0-alpha1)







[jira] [Reopened] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reopened HADOOP-13149:








[jira] [Updated] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13149:
---
Attachment: HADOOP-13149.branch-2.01.patch

Attaching a patch for the backport.

> Windows distro build fails on dist-copynativelibs.
> --
>
> Key: HADOOP-13149
> URL: https://issues.apache.org/jira/browse/HADOOP-13149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13149.001.patch, HADOOP-13149.branch-2.01.patch
>
>
> HADOOP-12892 pulled the dist-copynativelibs script into an external file.  
> The call to this script is failing when running a distro build on Windows.






[jira] [Commented] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.

2016-06-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333116#comment-15333116
 ] 

Akira AJISAKA commented on HADOOP-13149:


HADOOP-12892 was backported to branch-2 and branch-2.8, so this commit should 
be backported as well. What do you think, [~cnauroth]?

> Windows distro build fails on dist-copynativelibs.
> --
>
> Key: HADOOP-13149
> URL: https://issues.apache.org/jira/browse/HADOOP-13149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13149.001.patch
>
>
> HADOOP-12892 pulled the dist-copynativelibs script into an external file.  
> The call to this script is failing when running a distro build on Windows.






[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333108#comment-15333108
 ] 

Akira AJISAKA commented on HADOOP-12892:


After backporting this issue, we need to backport HDFS-10353, HADOOP-13149, and 
HADOOP-13245 as well. Let's do this.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.branch-2.8.patch, 
> HADOOP-12892.01.branch-2.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.branch-2.patch, HADOOP-12892.02.patch, 
> HADOOP-12892.03.branch-2.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Updated] (HADOOP-12892) fix/rewrite create-release

2016-06-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12892:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.0-alpha1)
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and branch-2.8. Thanks [~leftnoteasy] and [~andrew.wang] 
for review!







[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12893:
---
Fix Version/s: (was: 2.8.0)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333091#comment-15333091
 ] 

Akira AJISAKA commented on HADOOP-12893:


Committed to branch-2.7 and branch-2.7.3. Thanks to all who contributed to this 
issue.

bq. Which is, frankly, weird.
Agreed. If we need to fix it, please file a separate jira.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12893:
---
   Resolution: Fixed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)







[jira] [Commented] (HADOOP-13114) DistCp should have option to compress data on write

2016-06-15 Thread Suraj Nayak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333079#comment-15333079
 ] 

Suraj Nayak commented on HADOOP-13114:
--

[~raviprak]: Any improvements, suggestions, or review comments on this patch?

> DistCp should have option to compress data on write
> ---
>
> Key: HADOOP-13114
> URL: https://issues.apache.org/jira/browse/HADOOP-13114
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Suraj Nayak
>Assignee: Suraj Nayak
>Priority: Minor
>  Labels: distcp
> Attachments: HADOOP-13114-trunk_2016-05-07-1.patch, 
> HADOOP-13114-trunk_2016-05-08-1.patch, HADOOP-13114-trunk_2016-05-10-1.patch, 
> HADOOP-13114-trunk_2016-05-12-1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The DistCp utility should be able to store data in a user-specified 
> compression format. This avoids one hop of compressing data after transfer. 
> Backup strategies to a different cluster also benefit from saving one IO 
> operation to and from HDFS, saving resources, time, and effort.
> * Create an option -compressOutput defaulting to 
> {{org.apache.hadoop.io.compress.BZip2Codec}}.
> * Users will be able to change the codec with {{-D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec}}
> (a sketch of this override follows below).
> * If distcp compression is enabled, suffix the filenames with the default 
> codec extension to indicate that the file is compressed, so users can tell 
> which codec was used to compress the data.
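
For reference, a minimal sketch of the codec override described above. The two 
property keys are the standard MapReduce output-compression settings; the 
-compressOutput flag itself is only proposed in this patch, so this is an 
illustration, not DistCp code.

{code}
import org.apache.hadoop.conf.Configuration;

public class CompressionConfExample {
  public static void main(String[] args) {
    // The proposed -compressOutput option would effectively set these
    // standard MapReduce output-compression properties for the copy job.
    Configuration conf = new Configuration();
    conf.setBoolean("mapreduce.output.fileoutputformat.compress", true);
    conf.set("mapreduce.output.fileoutputformat.compress.codec",
        "org.apache.hadoop.io.compress.GzipCodec");
    System.out.println(
        conf.get("mapreduce.output.fileoutputformat.compress.codec"));
  }
}
{code}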






[jira] [Commented] (HADOOP-13260) Broken image links in hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md

2016-06-15 Thread ChandraSekar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333052#comment-15333052
 ] 

ChandraSekar commented on HADOOP-13260:
---

Agreed. This looks like a GitHub rendering issue.

> Broken image links in 
> hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md
> -
>
> Key: HADOOP-13260
> URL: https://issues.apache.org/jira/browse/HADOOP-13260
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: ChandraSekar
>Priority: Trivial
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Broken image links in 
> hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md






[jira] [Commented] (HADOOP-13260) Broken image links in hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md

2016-06-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333053#comment-15333053
 ] 

ASF GitHub Bot commented on HADOOP-13260:
-

Github user redborian closed the pull request at:

https://github.com/apache/hadoop/pull/98








[jira] [Comment Edited] (HADOOP-13264) Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible

2016-06-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333042#comment-15333042
 ] 

Yiqun Lin edited comment on HADOOP-13264 at 6/16/16 4:12 AM:
-

I have looked at the code; there are some other places that will not release 
resources associated with the stream, such as {{DFSOutputStream#abort}} and 
{{DFSStripedOutputStream#abort}}. We could fix them all in this jira. Could 
someone assign this to me? I'd like to post a patch for it. In addition, this 
jira seems better moved to HDFS rather than Hadoop Common.


was (Author: linyiqun):
I have looked at the code; there are some other places that will not release 
resources associated with the stream, such as {{DFSOutputStream#abort}} and 
{{DFSStripedOutputStream#abort}}. We could fix them all in this jira. Assign 
this jira to me and I will post an initial patch for it. In addition, this 
jira seems better moved to HDFS rather than Hadoop Common.

> Hadoop HDFS - DFSOutputStream close method fails to clean up resources in 
> case no hdfs datanodes are accessible 
> 
>
> Key: HADOOP-13264
> URL: https://issues.apache.org/jira/browse/HADOOP-13264
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Seb Mo
>
> Using:
> hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java
> Close method fails when the client can't connect to any data nodes. When 
> re-using the same DistributedFileSystem in the same JVM, if all the datanodes 
> can't be accessed, then this causes a memory leak as the 
> DFSClient#filesBeingWritten map is never cleared after that.
> See test program provided by [~sebyonthenet] in comments below.






[jira] [Commented] (HADOOP-13264) Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible

2016-06-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333042#comment-15333042
 ] 

Yiqun Lin commented on HADOOP-13264:


I have looked at the code; there are some other places that will not release 
resources associated with the stream, such as {{DFSOutputStream#abort}} and 
{{DFSStripedOutputStream#abort}}. We could fix them all in this jira. Assign 
this jira to me and I will post an initial patch for it. In addition, this 
jira seems better moved to HDFS rather than Hadoop Common.







[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333004#comment-15333004
 ] 

Hadoop QA commented on HADOOP-13280:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810968/HADOOP-13280.000.patch
 |
| JIRA Issue | HADOOP-13280 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 61b739ca3148 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5dfc38f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9790/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9790/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9790/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9790/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HADOOP-13264) Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible

2016-06-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332987#comment-15332987
 ] 

Yiqun Lin commented on HADOOP-13264:


I think this problem is different from HDFS-9812. HDFS-9812 solved the problem 
that the {{datastreamer}} thread was not closed when failures happened while 
flushing data, and that logic was done in {{closeImpl}}. In this problem, if 
the method {{closeImpl}} throws an IOException, {{dfsClient.endFileLease(fileId)}} 
will not be called. If we want to fix this, I suggest we keep the synchronized 
block, like this:

{code}
  public void close() throws IOException {
    IOException deferred = null;
    synchronized (this) {
      try (TraceScope ignored =
          dfsClient.newPathTraceScope("DFSOutputStream#close", src)) {
        closeImpl();
      } catch (IOException ioe) {
        // Defer the failure so that the lease is still released below.
        deferred = ioe;
      }
    }
    // Release the file lease even when closeImpl() failed; this is what
    // clears DFSClient#filesBeingWritten and avoids the leak.
    dfsClient.endFileLease(fileId);
    if (deferred != null) {
      throw new IOException(
          "Exception happened in closing the output stream.", deferred);
    }
  }
{code}

Correct me if I am wrong, thanks.







[jira] [Updated] (HADOOP-13272) ViewFileSystem should support storage policy related API

2016-06-15 Thread Peter Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Shi updated HADOOP-13272:
---
Attachment: HADOOP-13272.001.patch

> ViewFileSystem should support storage policy related API
> 
>
> Key: HADOOP-13272
> URL: https://issues.apache.org/jira/browse/HADOOP-13272
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, viewfs
>Reporter: Peter Shi
> Attachments: HADOOP-13272.001.patch
>
>
> The current {{ViewFileSystem}} does not support the storage-policy-related 
> APIs; it throws {{UnsupportedOperationException}}.
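
For reference, one possible shape of the fix inside ViewFileSystem (a sketch 
only, not the attached patch): resolve the path through the mount table and 
delegate to the target file system, mirroring how {{ViewFileSystem}} delegates 
its other calls. Names such as {{fsState}} and {{InodeTree.ResolveResult}} 
follow the existing ViewFileSystem internals.

{code}
// Hypothetical sketch: forward the storage-policy call to the file
// system that actually backs the mount point.
@Override
public void setStoragePolicy(Path src, String policyName) throws IOException {
  InodeTree.ResolveResult<FileSystem> res =
      fsState.resolve(getUriPath(src), true);
  res.targetFileSystem.setStoragePolicy(res.remainingPath, policyName);
}
{code}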






[jira] [Updated] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13280:
---
Attachment: HADOOP-13280.000.patch

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280.000.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data 
> from {{FileSystem$Statistics}}. As for {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}; this 
> JIRA will also address that.






[jira] [Updated] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13280:
---
Attachment: (was: HADOOP-13280.000.patch)







[jira] [Updated] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13280:
---
Attachment: HADOOP-13280.000.patch







[jira] [Updated] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13280:
---
Status: Patch Available  (was: Open)







[jira] [Created] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-15 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13280:
--

 Summary: FileSystemStorageStatistics#getLong(“readOps“) should 
return readOps + largeReadOps
 Key: HADOOP-13280
 URL: https://issues.apache.org/jira/browse/HADOOP-13280
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
{{FileSystem$Statistics}}. As for {{readOps}}, 
{{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the sum 
as well.

Moreover, there are no unit tests for {{FileSystemStorageStatistics}}; this 
JIRA will also address that.
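
A minimal sketch of the intended behavior (a hypothetical helper, not the 
actual patch); it relies only on the public getters of {{FileSystem$Statistics}}:

{code}
import org.apache.hadoop.fs.FileSystem;

final class StatisticsFetch {
  // Map a key to its long value so that "readOps" matches
  // FileSystem.Statistics#getReadOps(), i.e. readOps + largeReadOps.
  static Long fetch(FileSystem.Statistics stats, String key) {
    switch (key) {
      case "readOps":
        return Long.valueOf(stats.getReadOps());    // readOps + largeReadOps
      case "largeReadOps":
        return Long.valueOf(stats.getLargeReadOps());
      case "writeOps":
        return Long.valueOf(stats.getWriteOps());
      default:
        return null;                                // unknown key
    }
  }
}
{code}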






[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332793#comment-15332793
 ] 

Ravi Prakash commented on HADOOP-3733:
--

I reviewed the patch. Thanks a lot, Steve. It looks great! Nits:
1. {code}authority and scheme are not case sensitive{code} The authority is 
case sensitive, isn't it?
2. In general, checkPath is a little hard for me to understand. Could you 
please explain what you are checking in the javadoc?

After these three issues (decoding and these two) are addressed, I'm +1.

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret KKey as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
>  encoding="UTF-8"?>SignatureDoesNotMatchThe 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}
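
For illustration, a plain-JDK sketch (not distcp code) of why the encoded 
slash is fragile: {{java.net.URI#getUserInfo()}} percent-decodes the user 
info, so any layer that re-reads the decoded form sees a raw slash in the 
secret again.

{code}
import java.net.URI;
import java.net.URLEncoder;

public class EncodedSecretDemo {
  public static void main(String[] args) throws Exception {
    String secret = "Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv";
    // Encodes the slash as %2F, matching the work-around tried above.
    String encoded = URLEncoder.encode(secret, "UTF-8");
    URI uri = URI.create(
        "s3://RYWX12N9WCY42XVOL8WH:" + encoded + "@mybucket/dest");
    // getUserInfo() percent-decodes, so the raw "/" reappears here; code
    // that re-parses this decoded form splits the secret at the slash.
    System.out.println(uri.getUserInfo());
  }
}
{code}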






[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332748#comment-15332748
 ] 

Hadoop QA commented on HADOOP-13255:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
2s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810928/HADOOP-13255.05.patch 
|
| JIRA Issue | HADOOP-13255 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dac1aef6864e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f0aa75 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9789/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9789/testReport/ |
| modules | C: hadoop-common-project/hadoop-minikdc 
hadoop-common-project/hadoop-common 

[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332693#comment-15332693
 ] 

Anu Engineer commented on HADOOP-12291:
---

[~jnp] I think it is because branch-2.8 is missing HADOOP-12782. If we commit 
that JIRA, this one should go in without conflicts.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.9.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch, HADOOP-12291.008.patch, 
> HADOOP-12291.009.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> This facility is currently available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have it as part of 
> {{LdapGroupsMapping}} directly.
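
As an illustration of the requested behavior (a hypothetical sketch, not the 
{{LdapGroupsMapping}} implementation): starting from the user's direct groups, 
keep following "member of" links until no new groups appear.

{code}
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

final class NestedGroupExpansion {
  // memberOf maps a group to the groups that it is itself a member of.
  static Set<String> expand(Set<String> directGroups,
                            Map<String, Set<String>> memberOf) {
    Set<String> seen = new LinkedHashSet<>(directGroups);
    Deque<String> todo = new ArrayDeque<>(directGroups);
    while (!todo.isEmpty()) {
      for (String parent :
          memberOf.getOrDefault(todo.poll(), Collections.emptySet())) {
        if (seen.add(parent)) {
          todo.add(parent);   // newly discovered ancestor group
        }
      }
    }
    return seen;              // for jdoe: {A, B} instead of just {A}
  }
}
{code}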






[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-12291:
--
Fix Version/s: (was: 2.8.0)
   2.9.0







[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-12291:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)







[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332679#comment-15332679
 ] 

Jitendra Nath Pandey commented on HADOOP-12291:
---

I am resolving this as fixed for 2.9. If it is a must-have for 2.8, please re-open. 
cc [~vinodkv]

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch, HADOOP-12291.008.patch, 
> HADOOP-12291.009.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13279) Fix all Bad Practices flagged in Fortify

2016-06-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu resolved HADOOP-13279.
---
Resolution: Duplicate

> Fix all Bad Practices flagged in Fortify
> 
>
> Key: HADOOP-13279
> URL: https://issues.apache.org/jira/browse/HADOOP-13279
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The following code contains potential problems:
> {code}
> Unreleased Resource: Streams  TopCLI.java:738
> Unreleased Resource: Streams  Graph.java:189
> Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
> Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
> Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
> Unreleased Resource: Streams  TrafficController.java:629
> Portability Flaw: Locale Dependent Comparison TimelineWebServices.java:421
> Null Dereference  ApplicationImpl.java:465
> Null Dereference  VisualizeStateMachine.java:52
> Null Dereference  ContainerImpl.java:1089
> Null Dereference  QueueManager.java:219
> Null Dereference  QueueManager.java:232
> Null Dereference  ResourceLocalizationService.java:1016
> Null Dereference  ResourceLocalizationService.java:1023
> Null Dereference  ResourceLocalizationService.java:1040
> Null Dereference  ResourceLocalizationService.java:1052
> Null Dereference  ProcfsBasedProcessTree.java:802
> Null Dereference  TimelineClientImpl.java:639
> Null Dereference  LocalizedResource.java:206
> Code Correctness: Double-Checked Locking  ResourceHandlerModule.java:142
> Code Correctness: Double-Checked Locking  RMPolicyProvider.java:51
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13279) Fix all Bad Practices flagged in Fortify

2016-06-15 Thread Yufei Gu (JIRA)
Yufei Gu created HADOOP-13279:
-

 Summary: Fix all Bad Practices flagged in Fortify
 Key: HADOOP-13279
 URL: https://issues.apache.org/jira/browse/HADOOP-13279
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.9.0
Reporter: Yufei Gu
Assignee: Yufei Gu


The following code contains potential problems:
{code}
Unreleased Resource: Streams  TopCLI.java:738
Unreleased Resource: Streams  Graph.java:189
Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
Unreleased Resource: Streams  TrafficController.java:629
Portability Flaw: Locale Dependent Comparison  TimelineWebServices.java:421
Null Dereference  ApplicationImpl.java:465
Null Dereference  VisualizeStateMachine.java:52
Null Dereference  ContainerImpl.java:1089
Null Dereference  QueueManager.java:219
Null Dereference  QueueManager.java:232
Null Dereference  ResourceLocalizationService.java:1016
Null Dereference  ResourceLocalizationService.java:1023
Null Dereference  ResourceLocalizationService.java:1040
Null Dereference  ResourceLocalizationService.java:1052
Null Dereference  ProcfsBasedProcessTree.java:802
Null Dereference  TimelineClientImpl.java:639
Null Dereference  LocalizedResource.java:206
Code Correctness: Double-Checked Locking  ResourceHandlerModule.java:142
Code Correctness: Double-Checked Locking  RMPolicyProvider.java:51
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332672#comment-15332672
 ] 

Jitendra Nath Pandey commented on HADOOP-12291:
---

I have committed this to branch-2 as well. However, the patch doesn't apply to 
branch-2.8; other patches in this context are prerequisites for it to apply 
cleanly there. I am inclined to leave it as fixed in 2.9 only. 

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch, HADOOP-12291.008.patch, 
> HADOOP-12291.009.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332673#comment-15332673
 ] 

Daniel Templeton commented on HADOOP-13254:
---

Thanks, [~yufei].  A few more comments:

* Would you mind adding some comments to 
{{DiskValidatorFactory.getInstance(Class)}} that explain why you're checking 
the result of the put and using it if it's not null?  I don't think it's 
obvious that it's because it could be used by multiple threads.
* This assert message:

{code}
  assertTrue("checkDir success", success);
{code}

still needs to be clearer.  Something like, "call to checkDir() succeeded even 
though it was expected to fail"
* In {{TestBasicDiskValidator.checkDirs()}}, I think the try for the 
try-finally should start a bit earlier so that it encompasses all possible 
unexpected exit points.
* In {{TestBasicDiskValidator.checkDirs()}}, this code:
{code}
File localDir = File.createTempFile("test", "tmp");
if (isDir) {
  localDir.delete();
  localDir.mkdir();
}
{code}
doesn't make any sense to me.  Shouldn't you test whether it should be a dir 
*first* instead of deleting the file and creating a dir if that's what's 
needed? (See the sketch after this list.)
* In {{TestDiskValidatorFactory.testGetInstance()}}, at the end you try to get 
a bad instance, but you don't do anything with the result.  If that's on 
purpose, you should at least document it.
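
A minimal sketch of the reordering suggested in the two {{checkDirs()}} points 
above, hedged: the names are taken from the patch discussion, and the 
validation call is a placeholder. The dir-vs-file decision happens up front, 
and the try opens before any assertion can exit the method early.

{code}
// Hedged sketch, not the actual patch: pick the right kind of temp path first,
// then guard everything that can throw with a single try-finally.
File localPath = isDir
    ? java.nio.file.Files.createTempDirectory("test").toFile()
    : File.createTempFile("test", "tmp");
try {
  // ... exercise the DiskValidator under test against localPath ...
} finally {
  localPath.delete();  // cleanup runs on every exit path, expected or not
}
{code}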

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332658#comment-15332658
 ] 

Hadoop QA commented on HADOOP-13254:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810924/HADOOP-13254.004.patch
 |
| JIRA Issue | HADOOP-13254 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 87b5b26630d1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f0aa75 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9788/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9788/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13278) S3AFileSystem mkdirs does not need to validate parent path components

2016-06-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332668#comment-15332668
 ] 

Chris Nauroth commented on HADOOP-13278:


bq. This is my first ticket/pull request against Hadoop, so let me know if I'm 
not following some convention properly 

[~apetresc], thank you very much for participating in Apache Hadoop!  Please 
see our [HowToContribute|https://wiki.apache.org/hadoop/HowToContribute] wiki 
page for more details on how the contribution process works.  If you're 
interested in working on S3A, then also please pay particular attention to the 
section on [submitting patches against object 
stores|https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure].
  That section discusses our requirements for integration testing of patches 
against the back-end services (S3, Azure Storage, etc.).

I understand the motivation for the proposed change, but I have to vote -1, 
because it would violate the semantics required of a Hadoop-compatible file 
system.  The patch would allow a directory to be created as a descendant of a 
file, which works against expectations of applications in the Hadoop ecosystem. 
 More concretely, the patch causes a test failure in 
{{TestS3AContractMkdir#testMkdirOverParentFile}}, which tests for exactly this 
condition.  (See below.)

However, you might be interested to know that there is a lot of other work in 
progress on hardening and optimizing S3A.  This is tracked in issues 
HADOOP-11694 and HADOOP-13204, and their sub-tasks.

{code}
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 11.502 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
testMkdirOverParentFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir) 
 Time elapsed: 1.924 sec  <<< FAILURE!
java.lang.AssertionError: mkdirs did not fail over a file but returned true; ls 
s3a://cnauroth-test-aws-s3a/test/testMkdirOverParentFile[00] 
S3AFileStatus{path=s3a://cnauroth-test-aws-s3a/test/testMkdirOverParentFile; 
isDirectory=false; length=1024; replication=1; blocksize=33554432; 
modification_time=1466026655000; access_time=0; owner=; group=; 
permission=rw-rw-rw-; isSymlink=false} isEmptyDirectory=false

at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkdirOverParentFile(AbstractContractMkdirTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}


> S3AFileSystem mkdirs does not need to validate parent path components
> -
>
> Key: HADOOP-13278
> URL: https://issues.apache.org/jira/browse/HADOOP-13278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Adrian Petrescu
>Priority: Minor
>
> According to S3 semantics, there is no conflict if a bucket contains a key 
> named {{a/b}} and also a directory named {{a/b/c}}. "Directories" in S3 are, 
> after all, nothing but prefixes.
> However, the {{mkdirs}} call in {{S3AFileSystem}} does go out of its way to 
> traverse every parent path component for the directory it's trying to create, 
> making sure there's no file with that name. This is suboptimal for three main 
> reasons:
>  * Wasted API calls, since the client is getting metadata for each path 
> component 
>  * This can cause *major* problems with buckets whose permissions are being 
> managed by IAM, where access may not be granted to the root bucket, but only 
> to some prefix. When you call {{mkdirs}}, even on a prefix that you have 
> access to, the traversal up the path will cause you to eventually hit the 
> root bucket, which will fail with a 403 - even though the directory creation 
> call would have succeeded.
>  * Some 

[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332648#comment-15332648
 ] 

Andrew Wang commented on HADOOP-12892:
--

Forgot to mention: I looked at the shellcheck/shell doc errors too; these exist 
in the trunk version as well. I'm not sure why they weren't picked up by the 
original precommit run.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.branch-2.8.patch, 
> HADOOP-12892.01.branch-2.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.branch-2.patch, HADOOP-12892.02.patch, 
> HADOOP-12892.03.branch-2.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332646#comment-15332646
 ] 

Andrew Wang commented on HADOOP-12892:
--

Took a quick look, LGTM. Thanks Akira, and thanks Wangda for validating.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.branch-2.8.patch, 
> HADOOP-12892.01.branch-2.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.branch-2.patch, HADOOP-12892.02.patch, 
> HADOOP-12892.03.branch-2.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13278) S3AFileSystem mkdirs does not need to validate parent path components

2016-06-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332628#comment-15332628
 ] 

ASF GitHub Bot commented on HADOOP-13278:
-

GitHub user apetresc opened a pull request:

https://github.com/apache/hadoop/pull/100

HADOOP-13278. S3AFileSystem mkdirs does not need to validate parent path 
components

According to S3 semantics, there is no conflict if a bucket contains a key 
named `a/b` and also a directory named `a/b/c`. "Directories" in S3 are, after 
all, nothing but prefixes.

However, the `mkdirs` call in `S3AFileSystem` does go out of its way to 
traverse every parent path component for the directory it's trying to create, 
making sure there's no file with that name. This is suboptimal for three main 
reasons:

 * Wasted API calls, since the client is getting metadata for each path 
component 
 * This can cause *major* problems with buckets whose permissions are being 
managed by IAM, where access may not be granted to the root bucket, but only to 
some prefix. When you call `mkdirs`, even on a prefix that you have access to, 
the traversal up the path will cause you to eventually hit the root bucket, 
which will fail with a 403 - even though the directory creation call would have 
succeeded.
 * Some people might actually have a file that matches some other file's 
prefix... I can't see why they would want to do that, but it's not against S3's 
rules.

[I've opened a ticket](https://issues.apache.org/jira/browse/HADOOP-13278) 
on the Hadoop JIRA. This pull request is a simple patch that just removes this 
portion of the check. I have tested it with my team's instance of Spark + 
Luigi, and can confirm it works, and resolves the aforementioned permissions 
issue for a bucket on which we only had prefix access.

This is my first ticket/pull request against Hadoop, so let me know if I'm 
not following some convention properly :)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rubikloud/hadoop s3a-root-path-components

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/100.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #100


commit 8a28062d34e5f0c0b83a9577dc9d818bab58c269
Author: Adrian Petrescu 
Date:   2016-06-15T14:15:21Z

No need to check parent path components when creating a directory.

Given S3 semantics, there's actually no problem with having a/b/c be a 
prefix even if
a/b or a is already a file. So there's no need to check for it - it wastes 
API calls
and can lead to problems with access control if the caller only has 
permissions
starting at some prefix.




> S3AFileSystem mkdirs does not need to validate parent path components
> -
>
> Key: HADOOP-13278
> URL: https://issues.apache.org/jira/browse/HADOOP-13278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Adrian Petrescu
>Priority: Minor
>
> According to S3 semantics, there is no conflict if a bucket contains a key 
> named {{a/b}} and also a directory named {{a/b/c}}. "Directories" in S3 are, 
> after all, nothing but prefixes.
> However, the {{mkdirs}} call in {{S3AFileSystem}} does go out of its way to 
> traverse every parent path component for the directory it's trying to create, 
> making sure there's no file with that name. This is suboptimal for three main 
> reasons:
>  * Wasted API calls, since the client is getting metadata for each path 
> component 
>  * This can cause *major* problems with buckets whose permissions are being 
> managed by IAM, where access may not be granted to the root bucket, but only 
> to some prefix. When you call {{mkdirs}}, even on a prefix that you have 
> access to, the traversal up the path will cause you to eventually hit the 
> root bucket, which will fail with a 403 - even though the directory creation 
> call would have succeeded.
>  * Some people might actually have a file that matches some other file's 
> prefix... I can't see why they would want to do that, but it's not against 
> S3's rules.
> I've opened a pull request with a simple patch that just removes this portion 
> of the check. I have tested it with my team's instance of Spark + Luigi, and 
> can confirm it works, and resolves the aforementioned permissions issue for a 
> bucket on which we only had prefix access.
> This is my first ticket/pull request against Hadoop, so let me know if I'm 
> not following some convention properly :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-15 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332608#comment-15332608
 ] 

Ming Ma commented on HADOOP-13189:
--

Sounds good. [~shv], please go ahead, and thank you.

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch, HADOOP-13189.002.patch, 
> HADOOP-13189.003.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default, this results in a total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.
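
A minimal sketch of the division the description calls for, with illustrative 
numbers rather than the committed patch:

{code}
// Illustrative only: split the configured capacity across the sub-queues so
// the aggregate matches the configuration instead of multiplying it.
int numLevels = 4;                               // default priority levels
int configuredCapacity = 1000;                   // total queue size from config
int perQueue = configuredCapacity / numLevels;   // 250 per sub-queue
int remainder = configuredCapacity % numLevels;  // leftover goes to one queue
// Sub-queue 0 holds perQueue + remainder entries and the rest hold perQueue
// each, for an aggregate of exactly 1000 rather than 4000.
{code}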



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13278) S3AFileSystem mkdirs does not need to validate parent path components

2016-06-15 Thread Adrian Petrescu (JIRA)
Adrian Petrescu created HADOOP-13278:


 Summary: S3AFileSystem mkdirs does not need to validate parent 
path components
 Key: HADOOP-13278
 URL: https://issues.apache.org/jira/browse/HADOOP-13278
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Adrian Petrescu
Priority: Minor


According to S3 semantics, there is no conflict if a bucket contains a key 
named {{a/b}} and also a directory named {{a/b/c}}. "Directories" in S3 are, 
after all, nothing but prefixes.

However, the {{mkdirs}} call in {{S3AFileSystem}} does go out of its way to 
traverse every parent path component for the directory it's trying to create, 
making sure there's no file with that name. This is suboptimal for three main 
reasons:

 * Wasted API calls, since the client is getting metadata for each path 
component 
 * This can cause *major* problems with buckets whose permissions are being 
managed by IAM, where access may not be granted to the root bucket, but only to 
some prefix. When you call {{mkdirs}}, even on a prefix that you have access 
to, the traversal up the path will cause you to eventually hit the root bucket, 
which will fail with a 403 - even though the directory creation call would have 
succeeded.
 * Some people might actually have a file that matches some other file's 
prefix... I can't see why they would want to do that, but it's not against S3's 
rules.

I've opened a pull request with a simple patch that just removes this portion 
of the check. I have tested it with my team's instance of Spark + Luigi, and 
can confirm it works, and resolves the aforementioned permissions issue for a 
bucket on which we only had prefix access.

This is my first ticket/pull request against Hadoop, so let me know if I'm not 
following some convention properly :)
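
To make the cost concrete, a hypothetical sketch of the parent-walk pattern 
being questioned follows; it is not the actual {{S3AFileSystem}} source, and 
{{fs}} plus the bucket and path names are made up:

{code}
// Illustrative only. Each iteration issues one S3 metadata request, and the
// walk always reaches the bucket root -- where an IAM policy scoped to a
// prefix answers 403 even though the mkdirs target itself is writable.
Path dir = new Path("s3a://mybucket/granted/prefix/newdir");
for (Path p = dir.getParent(); p != null; p = p.getParent()) {
  try {
    FileStatus status = fs.getFileStatus(p);  // one HEAD/LIST per component
    if (status.isFile()) {
      throw new FileAlreadyExistsException("Parent " + p + " is a file");
    }
  } catch (FileNotFoundException ignored) {
    // a missing component is fine; only an existing *file* is a conflict
  }
}
fs.mkdirs(dir);
{code}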



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332594#comment-15332594
 ] 

Arpit Agarwal commented on HADOOP-13263:


Yes, they can be private members of the Groups class, exposed via a getter. If 
we are adding "Failed with an exception", we can also add "Total succeeded" so 
administrators can estimate what fraction of calls failed.

It would be perfectly fine to add the counters in a separate Jira since your 
background reload changes are valuable on their own.

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds, causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.
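
For reference, a minimal sketch of the Guava pattern these two parameters 
enable; {{fetchGroups}} and the pool wiring are placeholders rather than the 
patch itself:

{code}
// Hedged sketch of Guava's background refresh -- placeholder names throughout.
ExecutorService pool = Executors.newFixedThreadPool(1);  // ...reload.threads
LoadingCache<String, List<String>> groups = CacheBuilder.newBuilder()
    .refreshAfterWrite(300, TimeUnit.SECONDS)
    .build(new CacheLoader<String, List<String>>() {
      @Override
      public List<String> load(String user) throws Exception {
        return fetchGroups(user);  // the initial load still blocks the caller
      }
      @Override
      public ListenableFuture<List<String>> reload(String user, List<String> old) {
        // Expired keys land here: callers keep receiving the old value while
        // the task runs, so no RPC handler blocks on a slow group lookup.
        ListenableFutureTask<List<String>> task =
            ListenableFutureTask.create(() -> fetchGroups(user));
        pool.execute(task);
        return task;
      }
    });
{code}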



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-15 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13255:
---
Attachment: HADOOP-13255.05.patch

The test failure was due to cross-testcase cleanup. Patch 5 should pass.

[~xyao] / [~zhz], could you take a look and share your thoughts? I think this 
patch is correct, and I would personally like the fix to be more generic. 
Thanks again!

> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, 
> HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, 
> HADOOP-13255.test.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-15 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332575#comment-15332575
 ] 

Stephen O'Donnell commented on HADOOP-13263:


Thanks for the review. I have implemented 1-4, plus added a couple of further 
tests I used to figure things out during our earlier discussion.

For the counters, should these just be public ints on the Groups class? It 
probably makes sense to have three:

1. Pending
2. In Progress
3. Failed with an exception
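
One hedged possibility, following the earlier suggestion of private members 
exposed via getters (the field names here are made up): {{AtomicLong}} rather 
than a bare public int, since the background pool and callers touch the 
counters concurrently.

{code}
// Hypothetical counters for the Groups class -- names are illustrative only.
private final AtomicLong refreshQueued = new AtomicLong();     // 1. pending
private final AtomicLong refreshRunning = new AtomicLong();    // 2. in progress
private final AtomicLong refreshException = new AtomicLong();  // 3. failed

public long getRefreshQueued()    { return refreshQueued.get(); }
public long getRefreshRunning()   { return refreshRunning.get(); }
public long getRefreshException() { return refreshException.get(); }
{code}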

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds, causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HADOOP-13254:
--
Status: Patch Available  (was: Open)

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-15 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332555#comment-15332555
 ] 

Yufei Gu commented on HADOOP-13254:
---

[~templedf], thanks a lot for the detailed review. I uploaded patch 004 
addressing all the comments.

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HADOOP-13254:
--
Attachment: HADOOP-13254.004.patch

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated HADOOP-13254:
--
Status: Open  (was: Patch Available)

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13277) Need To Support IAM role based access for supporting Amazon S3

2016-06-15 Thread subbu srinivasan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332548#comment-15332548
 ] 

subbu srinivasan commented on HADOOP-13277:
---

Hi Chris,
Yes. Using the setting of 
com.amazonaws.auth.DefaultAWSCredentialsProviderChain for 
fs.s3a.aws.credentials.provider should solve the problem. 

I added this to core-site.xml:

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.DefaultAWSCredentialsProviderChain</value>
</property>

It would be useful to add this to the documentation.

> Need To Support IAM role based access for supporting Amazon S3
> --
>
> Key: HADOOP-13277
> URL: https://issues.apache.org/jira/browse/HADOOP-13277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: subbu srinivasan
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We currently need the Amazon secret access ID/credentials as part of 
> core-site.xml. This is not ideal in many deployments; we would rather use IAM 
> roles to accomplish access to S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-15 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332540#comment-15332540
 ] 

Konstantin Shvachko commented on HADOOP-13189:
--

What do you think?

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch, HADOOP-13189.002.patch, 
> HADOOP-13189.003.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default, this results in a total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-15 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332537#comment-15332537
 ] 

Konstantin Shvachko commented on HADOOP-13189:
--

_I think it is a bug fix._ The current meaning of the call queue size for 
FairCallQueue contradicts the documentation.
The refreshCallQueue command is a good example. You had a standard queue of 
size 1000. Then you switch to FairCallQueue via refreshCallQueue, and the queue 
size suddenly increases to 4000. I don't think this is expected. It is 
definitely not documented.

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch, HADOOP-13189.002.patch, 
> HADOOP-13189.003.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default, this results in a total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332517#comment-15332517
 ] 

Ravi Prakash commented on HADOOP-3733:
--

Found it. In {{S3xLoginHelper.extractLoginDetails}} we should just do this: 
{code}password = java.net.URLDecoder.decode(login.substring(loginSplit + 1), 
"UTF-8");{code}


> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
>  encoding="UTF-8"?>SignatureDoesNotMatchThe 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Open  (was: Patch Available)

the failing tests are real regressions, for a change

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332449#comment-15332449
 ] 

Steve Loughran commented on HADOOP-13207:
-

FWIW, I'd seen that new test, {{testComplexDirActions()}}, fail against s3a, but 
assumed that I'd done something very wrong there. Looks like it's the test 
that's at fault.

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332422#comment-15332422
 ] 

Hadoop QA commented on HADOOP-13207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  4m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
47s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} root: The patch generated 0 new + 29 unchanged - 51 
fixed = 29 total (was 80) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 39s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus |
|   | hadoop.fs.contract.rawlocal.TestRawlocalContractGetFileStatus |
| JDK v1.8.0_91 Timed out junit tests | 

[jira] [Updated] (HADOOP-5353) add progress callback feature to the slow FileUtil operations with ability to cancel the work

2016-06-15 Thread Pranav Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranav Prakash updated HADOOP-5353:
---
Attachment: HADOOP-5353.002.patch

Thank you for the code review, Steve! 

I've uploaded a revised version of the patch based on your feedback and cleaned 
up the duplicate code between the copy methods that take a progress handle and 
the ones that don’t.

> add progress callback feature to the slow FileUtil operations with ability to 
> cancel the work
> -
>
> Key: HADOOP-5353
> URL: https://issues.apache.org/jira/browse/HADOOP-5353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Pranav Prakash
>Priority: Minor
> Attachments: HADOOP-5353.000.patch, HADOOP-5353.001.patch, 
> HADOOP-5353.002.patch
>
>
> This is something only of relevance to people doing front ends to FS 
> operations, and as they could take the code in FSUtil and add something with 
> this feature, it's a blocker to none of them. 
> Current FileUtil.copy can take a long time to move large files around, but 
> there is no progress indicator to GUIs, or a way to cancel the operation 
> mid-way short of interrupting the thread or closing the filesystem.
> I propose a FileIOProgress interface to the copy ops, one that has a single 
> method to notify listeners of bytes read and written, and the number of files 
> handled.
> {code}
> interface FileIOProgress {
>  boolean progress(int files, long bytesRead, long bytesWritten);
> }
> {code}
> The return value would be true to continue the operation, or false to stop 
> the copy and leave the FS in whatever incomplete state it is in currently. 
> It could even be fancier: have beginFileOperation and endFileOperation 
> callbacks to pass in the name of the current file being worked on, though I 
> don't have a personal need for that.
> GUIs could show progress bars and cancel buttons, other tools could use the 
> interface to pass any cancellation notice upstream.
> The FileUtil.copy operations would call this interface (blocking) after every 
> block copy, so the frequency of invocation would depend on block size and 
> network/disk speeds. Which is also why I don't propose having any percentage 
> done indicators; it's too hard to predict percentage of time done for 
> distributed file IO with any degree of accuracy.
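
To make the proposal concrete, here is a hedged sketch of how such a callback could be threaded through a copy loop (the method shape and buffer size are illustrative, not the actual FileUtil code):
{code}
import java.io.*;

interface FileIOProgress {
  boolean progress(int files, long bytesRead, long bytesWritten);
}

class ProgressCopy {
  /** Copy with per-block progress callbacks; returns false if cancelled. */
  static boolean copy(InputStream in, OutputStream out,
                      int files, FileIOProgress progress) throws IOException {
    byte[] buffer = new byte[4096];
    long bytesRead = 0, bytesWritten = 0;
    int n;
    while ((n = in.read(buffer)) > 0) {
      bytesRead += n;
      out.write(buffer, 0, n);
      bytesWritten += n;
      // Blocking callback after every block, as proposed above.
      if (!progress.progress(files, bytesRead, bytesWritten)) {
        return false; // listener asked to cancel; FS left as-is
      }
    }
    return true;
  }
}
{code}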



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13277) Need To Support IAM role based access for supporting Amazon S3

2016-06-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-13277.

Resolution: Duplicate

Hello [~ssriniva...@gmail.com].  The S3A file system implementation already 
provides a lot of flexibility in the authentication options, including IAM 
role-based authentication.  For more details, see issues HADOOP-10400, 
HADOOP-12537, HADOOP-12723 and HADOOP-12807.

If this issue refers to the legacy S3 and S3N file systems instead of S3A, then 
it's unlikely that changes would be made to those.  The investment is going 
into S3A at this point.

I'm going to resolve this issue as a duplicate, but please feel free to reopen 
if I misunderstood something.
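
For anyone who finds this looking for the configuration involved, a minimal core-site.xml sketch of IAM-role-based auth for S3A (the property comes from the HADOOP-12537/HADOOP-12807 line of work; the provider class is the AWS SDK's instance-profile provider, and exact support varies by Hadoop version):
{noformat}
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.InstanceProfileCredentialsProvider</value>
</property>
{noformat}
With this set, no access key or secret key needs to appear in core-site.xml; credentials come from the EC2 instance metadata service.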

> Need To Support IAM role based access for supporting Amazon S3
> --
>
> Key: HADOOP-13277
> URL: https://issues.apache.org/jira/browse/HADOOP-13277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: subbu srinivasan
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We need the Amazon secret access id/credentials as part of core-site.xml.
> This is not ideal in many deployments; we would rather use IAM roles to
> accomplish access to S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332360#comment-15332360
 ] 

Hudson commented on HADOOP-12291:
-

ABORTED: Integrated in Hadoop-trunk-Commit #9963 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9963/])
HADOOP-12291. Add support for nested groups in LdapGroupsMapping. (jitendra: 
rev 6f0aa75121224589fe1e20630c597f851ef3bed2)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMappingWithPosixGroup.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMappingBase.java


> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch, HADOOP-12291.008.patch, 
> HADOOP-12291.009.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
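
For readers wiring this up, a hedged core-site.xml sketch of enabling nested-group resolution (the hierarchy-levels property name is my reading of the committed patch; check core-default.xml in your build before relying on it):
{noformat}
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <!-- assumed knob from this change; 0 (the default) keeps the old one-level lookup -->
  <name>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</name>
  <value>2</value>
</property>
{noformat}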



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13277) Need To Support IAM role based access for supporting Amazon S3

2016-06-15 Thread subbu srinivasan (JIRA)
subbu srinivasan created HADOOP-13277:
-

 Summary: Need To Support IAM role based access for supporting 
Amazon S3
 Key: HADOOP-13277
 URL: https://issues.apache.org/jira/browse/HADOOP-13277
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: subbu srinivasan


We need the Amazon secret access id/credentials as part of core-site.xml.
This is not ideal in many deployments; we would rather use IAM roles to
accomplish access to S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332302#comment-15332302
 ] 

Jitendra Nath Pandey commented on HADOOP-12291:
---

I have committed this to trunk. Thanks for the contribution, [~ekundin].
Keeping the jira open until committed to branch-2 and branch-2.8.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch, HADOOP-12291.008.patch, 
> HADOOP-12291.009.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332289#comment-15332289
 ] 

Hadoop QA commented on HADOOP-3733:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810891/HADOOP-3733-branch-2-006.patch
 |
| JIRA Issue | HADOOP-3733 |
| Optional Tests |  asflicense  findbugs  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  checkstyle  |
| uname | Linux 73bbaabdd6b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332273#comment-15332273
 ] 

Ravi Prakash commented on HADOOP-3733:
--

Thanks Steve! With patch v6, I am able to use AWS secrets without slashes. With 
un-encoded slashes I see this:
{code}
java.lang.NullPointerException: null uri host. This can be caused by unencoded 
/ in the password string
at java.util.Objects.requireNonNull(Objects.java:228)
at 
org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:60)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:199)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:294)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:373)
{code}

But with encoded slashes, I still can't do an {{ls}} successfully
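
(For reference, a tiny sketch of the encoding step under discussion, using only the JDK; the secret is the throwaway example from the issue text:)
{code}
import java.net.URLEncoder;

public class EncodeSecret {
  public static void main(String[] args) throws Exception {
    String secret = "Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv";
    // Percent-encode so the '/' becomes %2F before embedding it in a URI.
    System.out.println(URLEncoder.encode(secret, "UTF-8"));
    // prints: Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
  }
}
{code}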

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Patch Available  (was: Open)

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's a lot of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.
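
For orientation, a minimal usage sketch of the operations being specified (this is the existing FileSystem API, nothing added by the patch):
{code}
FileSystem fs = FileSystem.get(conf);
Path dir = new Path("/data");

// Flat listing of one directory
FileStatus[] children = fs.listStatus(dir);

// Recursive file enumeration via the iterator-based API
RemoteIterator<LocatedFileStatus> files = fs.listFiles(dir, true);
while (files.hasNext()) {
  LocatedFileStatus status = files.next();
  System.out.println(status.getPath() + " " + status.getLen());
}
{code}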



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Attachment: HADOOP-13207-branch-2-006.patch

Patch 006: javadoc/checkstyle

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's a lot of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Open  (was: Patch Available)

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's a lot of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Patch Available  (was: Open)

Patch 006; tested against S3 Ireland. I haven't tested inline secrets in s3, 
s3n, or on the command line; just in a unit test that is set up to do this (and 
goes to some effort not to log the details on a failure: the first time I've written 
a unit test to be deliberately useless when reporting failures).
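
A hedged illustration of that pattern: an assertion that deliberately withholds the sensitive values from its failure message (all names here are illustrative, not the actual test):
{code}
@Test
public void testSecretRoundTrip() throws Exception {
  String expected = conf.get("fs.s3a.secret.key");
  String actual = parsedLogin.getPassword(); // hypothetical helper under test
  // Deliberately useless on failure: never echo the secrets themselves.
  assertTrue("secret mismatch (values withheld)", expected.equals(actual));
}
{code}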

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Attachment: HADOOP-3733-branch-2-006.patch

Patch branch-2-006: adds a couple of lines to the documentation telling people 
not to put secrets in their URLs.

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-15 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332186#comment-15332186
 ] 

Jitendra Nath Pandey commented on HADOOP-12291:
---

+1 for the latest patch. I will commit it shortly.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch, HADOOP-12291.008.patch, 
> HADOOP-12291.009.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Attachment: HADOOP-3733-branch-2-005.patch

Patch 0005
* all s3 filesystems warn that you shouldn't be putting secrets in your URLs
* and s3n/s3 don't mention the technique in their error messages
* javadocs
* fix findbugs warnings, one through a fix, one through commenting it out. 
(it's essentially the same code copied and pasted from FileSystem; it's 
disabled there too).

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Open  (was: Patch Available)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332128#comment-15332128
 ] 

Allen Wittenauer commented on HADOOP-12893:
---

From my quick pass over the netty source, it looks like they include the 
license files of their optional components in their jar.  So, not just their 
actual/bundled dependencies. Which is, frankly, weird. 

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.8.0, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332105#comment-15332105
 ] 

Allen Wittenauer commented on HADOOP-12893:
---

bq. META-INF/license/LICENSE.jboss-logging.txt

Oh, this problematic.  JBoss is LGPL 2.1.  Strictly forbidden.





> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.8.0, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332105#comment-15332105
 ] 

Allen Wittenauer edited comment on HADOOP-12893 at 6/15/16 5:05 PM:


bq. META-INF/license/LICENSE.jboss-logging.txt

Oh, this is problematic.  JBoss is LGPL 2.1.  Strictly forbidden.






was (Author: aw):
bq. META-INF/license/LICENSE.jboss-logging.txt

Oh, this problematic.  JBoss is LGPL 2.1.  Strictly forbidden.





> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.8.0, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13126) Add Brotli compression codec

2016-06-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13126:
---
Comment: was deleted

(was: Applied the patch and created binary distro. It includes 
jbrotli-native-linux-x86-amd64-0.5.0.jar and libbrotli.so is included in the 
jar file, so I'm thinking we should add the following to NOTICE.txt.
{noformat}
This product optionally depends on 'brotli', a compression
and decompression library, which can be obtained at:

  * LICENSE:
* license/LICENSE.brotli.txt (MIT License)
  * HOMEPAGE:
* https://github.com/google/brotli
{noformat})

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.7.2
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch, HADOOP-13126.2.patch, 
> HADOOP-13126.3.patch, HADOOP-13126.4.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].
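
For context on how a codec like this plugs into Hadoop's compression framework, a minimal hedged sketch (the {{BrotliCodec}} class name and {{.br}} extension are assumptions based on the patch, not a shipped API):
{code}
Configuration conf = new Configuration();
// Register the codec; the class name is assumed, per the patch under review.
conf.set("io.compression.codecs", "org.apache.hadoop.io.compress.BrotliCodec");

FileSystem fs = FileSystem.get(conf);
CompressionCodecFactory factory = new CompressionCodecFactory(conf);
// Resolve by file extension, the way MapReduce picks codecs for its inputs.
CompressionCodec codec = factory.getCodec(new Path("part-00000.br"));
OutputStream out = codec.createOutputStream(fs.create(new Path("part-00000.br")));
{code}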



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332076#comment-15332076
 ] 

Akira AJISAKA commented on HADOOP-12893:


I got it. Thanks a lot!

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.8.0, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332071#comment-15332071
 ] 

Xiao Chen commented on HADOOP-12893:


Hi [~ajisakaa],
I believe that is from within the netty notice included in Gson. It's referring 
to the files inside the netty jar:
{noformat}
META-INF/license/LICENSE.base64.txt
META-INF/license/LICENSE.commons-logging.txt
META-INF/license/LICENSE.felix.txt
META-INF/license/LICENSE.jboss-logging.txt
META-INF/license/LICENSE.jsr166y.txt
META-INF/license/LICENSE.jzlib.txt
META-INF/license/LICENSE.log4j.txt
META-INF/license/LICENSE.protobuf.txt
META-INF/license/LICENSE.slf4j.txt
META-INF/license/LICENSE.webbit.txt
{noformat}
So IMHO we can leave it as-is.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.8.0, 2.6.5
>
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, 
> HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13259) mvn fs/s3 test runs to set DNS TTL to 20s

2016-06-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15332027#comment-15332027
 ] 

Chris Nauroth commented on HADOOP-13259:


[~ste...@apache.org], do you think we could achieve what you're describing by 
setting the system properties related to DNS TTL in the Surefire 
configuration's {{systemPropertyVariables}}?  That would propagate down to 
the forked JVM processes that run the JUnit tests.
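
A hedged sketch of that idea in a module pom, assuming {{sun.net.inetaddr.ttl}} is the JVM property in question (on some JVMs the equivalent knob is the {{networkaddress.cache.ttl}} security property, which cannot be set this way):
{noformat}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <sun.net.inetaddr.ttl>20</sun.net.inetaddr.ttl>
    </systemPropertyVariables>
  </configuration>
</plugin>
{noformat}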

> mvn fs/s3 test runs to set DNS TTL to 20s
> -
>
> Key: HADOOP-13259
> URL: https://issues.apache.org/jira/browse/HADOOP-13259
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> S3 is faster and more resilient to failure if the DNS load balancing is 
> queried regularly. This should be done in testing both for performance and to 
> verify that frequent DNS refresh doesn't break things.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13264) Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible

2016-06-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331932#comment-15331932
 ] 

Kihwal Lee commented on HADOOP-13264:
-

Since it is closely related to HDFS-9812, [~linyiqun], can you take a look at 
this?  

> Hadoop HDFS - DFSOutputStream close method fails to clean up resources in 
> case no hdfs datanodes are accessible 
> 
>
> Key: HADOOP-13264
> URL: https://issues.apache.org/jira/browse/HADOOP-13264
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Seb Mo
>
> Using:
> hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java
> The close method fails when the client can't connect to any datanodes. When 
> re-using the same DistributedFileSystem in the same JVM, if all the datanodes 
> can't be accessed, this causes a memory leak, as the 
> DFSClient#filesBeingWritten map is never cleared after that.
> See test program provided by [~sebyonthenet] in comments below.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331856#comment-15331856
 ] 

Hadoop QA commented on HADOOP-3733:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 3 new + 
99 unchanged - 0 fixed = 102 total (was 99) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-tools/hadoop-aws generated 3 new + 0 unchanged 
- 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, 
Path, int)   At S3xLoginHelper.java:== or != in 
org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, 
Path, int)   At S3xLoginHelper.java:[line 162] |
|  |  Null passed for non-null parameter of toString(URI) in 
org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, 
Path, int)  Method invoked at S3xLoginHelper.java:of 

[jira] [Commented] (HADOOP-13273) start-build-env.sh fails

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331789#comment-15331789
 ] 

Hadoop QA commented on HADOOP-13273:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-13273 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810787/HADOOP-13273.001.patch
 |
| JIRA Issue | HADOOP-13273 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9784/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> start-build-env.sh fails
> 
>
> Key: HADOOP-13273
> URL: https://issues.apache.org/jira/browse/HADOOP-13273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: OS X EI Capitan 10.11.5
>Reporter: Denis Bolshakov
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13273.001.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Running start-build-env.sh on Mac fails when executing:
> RUN apt-get install -y software-properties-common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Attachment: HADOOP-3733-branch-2-004.patch

Patch branch-2-004; fixes checkstyle.

Chris: thanks for the +1; I'm waiting for Ravi to make another attempt at getting 
this to work. 

FWIW, I don't think people should be trying to use credentials on the CLI. This 
patch tries to strip them from the URL and path, but they do creep out in error 
messages and stack traces.

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Patch Available  (was: Open)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Open  (was: Patch Available)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Open  (was: Patch Available)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Status: Patch Available  (was: Open)

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.2-alpha, 0.17.1
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-3733:
---
Attachment: HADOOP-3733-branch-2-003.patch

Patch 003

* fixes canonicalization so that there shouldn't be errors in path checking now 
that auth details are being stripped out of fsUri
* tests this
* I've not been able to replicate the checkpath/canonicalization problem which 
Ravi reported to me; he'll have to test it himself.
* adds a special message for the case where getHost()==null but 
getAuthority()!=null; this situation arises if there is an unencoded / in the 
password (a sketch of this check follows the stack trace below):

{code}
-ls: Fatal internal error
java.lang.NullPointerException: null uri host. This can be caused by unencoded 
/ in the password string
at java.util.Objects.requireNonNull(Objects.java:228)
at 
org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:53)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:199)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:294)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:373)
  {code}
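
A hedged sketch of the check described in the last bullet above; the real code 
lives in {{S3xLoginHelper.buildFSURI()}}, and the exact shape here is an 
assumption for illustration, not a quote from the patch:

{code}
import java.net.URI;
import java.util.Objects;

final class LoginHelperSketch {
  // Sketch only: an unencoded "/" in the secret can make java.net.URI parse
  // with a non-null authority but a null host, so fail with a pointed message
  // instead of a bare NullPointerException from requireNonNull.
  static URI buildFSURI(URI uri) {
    if (uri.getHost() == null && uri.getAuthority() != null) {
      throw new NullPointerException("null uri host."
          + " This can be caused by unencoded / in the password string");
    }
    Objects.requireNonNull(uri.getHost(), "null uri host.");
    // Rebuild the filesystem URI without any user:secret@ credentials.
    return URI.create(uri.getScheme() + "://" + uri.getHost());
  }
}
{code}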
  

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733.patch, hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message>
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13225) Allow java to be started with numactl

2016-06-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331733#comment-15331733
 ] 

Allen Wittenauer commented on HADOOP-13225:
---

I was going through some old email today and ended up thinking about this JIRA 
this morning.  

A few months ago, someone asked me how easy it would be to run the Hadoop 
daemons in a cgroup with 3.x.  I mentioned that it would be trivial: just 
replace the java execution functions.

One of the key centerpieces of the shell script rewrite was the ability to 
replace functions.  This means that if an end user doesn't like how we do 
something, they can replace it with their own. 

This JIRA is a *great* example of that in action.  One user wants numactl and 
another wants cgexec.  We really can't support both without baking a lot of 
"if this, then this; otherwise if this, then this" decisions into the shell 
code.

What we really should be doing here is providing this as an additional example 
in hadoop-user-functions.sh.example.  It's a really good one because it's 
trivial to write (at least for the non-secure case), useful for a subset of 
users, and a great template for other subsets of users to implement their own 
logic (e.g., pfexec on Solaris).

With that said, I'll volunteer to write it up if another committer is actually 
willing to review it.

> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13275) hadoop fs command path doesn't include translation of amazon client exceptions

2016-06-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13275.
-
Resolution: Invalid

it is being translated; this is just the localised message being printed.

> hadoop fs command path doesn't include translation of amazon client exceptions
> --
>
> Key: HADOOP-13275
> URL: https://issues.apache.org/jira/browse/HADOOP-13275
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> If you try to do unauthed write operations to an s3a repo, the failure can 
> surface without the {{AmazonClientException}} being translated
> {code}
> bin/hadoop fs -D 
> fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
>  -rm s3a://landsat-pds/scene_list.gz
> 16/06/15 14:03:32 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/06/15 14:03:35 INFO Configuration.deprecation: io.bytes.per.checksum is 
> deprecated. Instead, use dfs.bytes-per-checksum
> rm: s3a://landsat-pds/scene_list.gz: delete on 
> s3a://landsat-pds/scene_list.gz: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
> Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
> 0F2954E98D193227), S3 Extended Request ID: 
> hpGyx9Snazi71vqxJcsLTr054aUO3+wu9fBEGgjbx0y41nMF6Xj5oyA+P9/0G6A6H93BsOtrDuM=
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13276) S3a operations keep retrying if the password is wrong

2016-06-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331724#comment-15331724
 ] 

Steve Loughran commented on HADOOP-13276:
-

The final output is the original exception:
{code}
ls: : getFileStatus on : com.amazonaws.services.s3.model.AmazonS3Exception: The 
request signature we calculated does not match the signature you provided. 
Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error 
Code: SignatureDoesNotMatch; Request ID: 756C67505DF05C0F), S3 Extended Request 
ID: ZMzPOdq8K1FeTDtSKVU0p+FotFU+EmCvnko8tH5n00hCj71ZUq/5ffn0NP7LWz7WZI1tVsDnFos=
{code}

It looks like the AWS retry policy considers an auth failure retryable, and so 
retries repeatedly with exponential backoff. This is probably not the right 
strategy, unless there can be transient signing/signature-validation problems.
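
A sketch of what a faster-failing policy might look like with the AWS SDK v1 
retry APIs that s3a configures through {{ClientConfiguration}}; the class name 
and wiring here are assumptions for illustration, not a proposed patch:

{code}
import com.amazonaws.AmazonServiceException;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.retry.PredefinedRetryPolicies;
import com.amazonaws.retry.RetryPolicy;

final class FailFastAuthRetry {
  static void apply(ClientConfiguration conf) {
    RetryPolicy.RetryCondition failFast = (request, exception, retries) -> {
      if (exception instanceof AmazonServiceException) {
        AmazonServiceException ase = (AmazonServiceException) exception;
        // A 403/SignatureDoesNotMatch will not heal on retry: the
        // credentials are wrong, so give up immediately.
        if (ase.getStatusCode() == 403
            || "SignatureDoesNotMatch".equals(ase.getErrorCode())) {
          return false;
        }
      }
      // Everything else falls back to the SDK's default retry condition.
      return PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION
          .shouldRetry(request, exception, retries);
    };
    conf.setRetryPolicy(new RetryPolicy(failFast,
        PredefinedRetryPolicies.DEFAULT_BACKOFF_STRATEGY,
        PredefinedRetryPolicies.DEFAULT_MAX_ERROR_RETRY, true));
  }
}
{code}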

> S3a operations keep retrying if the password is wrong
> -
>
> Key: HADOOP-13276
> URL: https://issues.apache.org/jira/browse/HADOOP-13276
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Minor
>
> If you do a {{hadoop fs}} command with the AWS account valid but the password 
> wrong, it takes a while to timeout, because of retries happening underneath.
> Eventually it gives up, but failing fast would be better.
> # maybe: check the password length and fail if it is not the right length (is 
> there a standard one? Or at least a range?)
> # consider a retry policy which fails faster on signature failures/403 
> responses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13276) S3a operations keep retrying if the password is wrong

2016-06-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331716#comment-15331716
 ] 

Steve Loughran commented on HADOOP-13276:
-

Stack trace of the failure:
{code}
"main" #1 prio=5 os_prio=31 tid=0x7ffab3024800 nid=0x1703 waiting on 
condition [0x70218000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
com.amazonaws.http.AmazonHttpClient.pauseBeforeNextRetry(AmazonHttpClient.java:1248)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:684)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
at 
com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:653)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listObjects(S3AFileSystem.java:887)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1459)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:109)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:282)
at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1678)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:1857)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:373)
{code}

> S3a operations keep retrying if the password is wrong
> -
>
> Key: HADOOP-13276
> URL: https://issues.apache.org/jira/browse/HADOOP-13276
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Minor
>
> If you do a {{hadoop fs}} command with the AWS account valid but the password 
> wrong, it takes a while to timeout, because of retries happening underneath.
> Eventually it gives up, but failing fast would be better.
> # maybe: check the password length and fail if it is not the right length (is 
> there a standard one? Or at least a range?)
> # consider a retry policy which fails faster on signature failures/403 
> responses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13276) S3a operations keep retrying if the password is wrong

2016-06-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13276:
---

 Summary: S3a operations keep retrying if the password is wrong
 Key: HADOOP-13276
 URL: https://issues.apache.org/jira/browse/HADOOP-13276
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Steve Loughran
Priority: Minor


If you do a {{hadoop fs}} command with the AWS account valid but the password 
wrong, it takes a while to timeout, because of retries happening underneath.

Eventually it gives up, but failing fast would be better.

# maybe: check the password length and fail if it is not the right length (is 
there a standard one? Or at least a range?)
# consider a retry policy which fails faster on signature failures/403 responses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13275) hadoop fs command path doesn't include translation of amazon client exceptions

2016-06-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13275:
---

 Summary: hadoop fs command path doesn't include translation of 
amazon client exceptions
 Key: HADOOP-13275
 URL: https://issues.apache.org/jira/browse/HADOOP-13275
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


If you try to do unauthed write operations to an s3a repo, the failure can 
surface without the {{AmazonClientException}} being translated
{code}
bin/hadoop fs -D 
fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
 -rm s3a://landsat-pds/scene_list.gz
16/06/15 14:03:32 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/06/15 14:03:35 INFO Configuration.deprecation: io.bytes.per.checksum is 
deprecated. Instead, use dfs.bytes-per-checksum
rm: s3a://landsat-pds/scene_list.gz: delete on s3a://landsat-pds/scene_list.gz: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
0F2954E98D193227), S3 Extended Request ID: 
hpGyx9Snazi71vqxJcsLTr054aUO3+wu9fBEGgjbx0y41nMF6Xj5oyA+P9/0G6A6H93BsOtrDuM=
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331664#comment-15331664
 ] 

Hadoop QA commented on HADOOP-12893:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} branch-2.7.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
12s{color} | {color:green} branch-2.7.3 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} branch-2.7.3 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
42s{color} | {color:green} branch-2.7.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.7.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
2s{color} | {color:green} branch-2.7.3 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
13s{color} | {color:green} branch-2.7.3 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-project-dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7753 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  3m 
48s{color} | {color:red} The patch 184 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 37s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
0s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.ipc.TestDecayRpcScheduler |
| JDK v1.8.0_91 Timed out junit tests | 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken |
|   | org.apache.hadoop.conf.TestConfiguration |
| JDK v1.7.0_101 Timed out junit tests | 
org.apache.hadoop.conf.TestConfiguration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c420dfe |
| 

[jira] [Commented] (HADOOP-13274) Filesystem.checkPath should use StringUtils.equalsIgnoreCase for comparisons

2016-06-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331650#comment-15331650
 ] 

Steve Loughran commented on HADOOP-13274:
-

This is comparing URI schemes, so SWIFT != swift.

> Filesystem.checkPath should use StringUtils.equalsIgnoreCase for comparisons
> 
>
> Key: HADOOP-13274
> URL: https://issues.apache.org/jira/browse/HADOOP-13274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> {{Filesystem.checkPath}} compares URI elements using 
> {{String.equalsIgnoreCase()}}, so is brittle against i18n locale changes.
> It should move to {{StringUtils.equalsIgnoreCase}} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13274) Filesystem.checkPath should use StringUtils.equalsIgnoreCase for comparisons

2016-06-15 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331614#comment-15331614
 ] 

Vinayakumar B commented on HADOOP-13274:


A similar issue was discussed in HDFS-8705, but it was found that 
{{String.equalsIgnoreCase()}} is sufficient there. 
Moreover, {{StringUtils.equalsIgnoreCase()}} does nothing different there.
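
For context, a small hedged demonstration of where the locale trap actually 
lives: {{toLowerCase()}}/{{toUpperCase()}} are locale-sensitive, while 
{{String.equalsIgnoreCase()}} compares per character and is locale-independent, 
which is why it holds up even under Turkish dotted/dotless-i rules:

{code}
import java.util.Locale;

public class SchemeCompare {
  public static void main(String[] args) {
    // Turkish locale: upper-case I lower-cases to dotless ı.
    Locale.setDefault(new Locale("tr", "TR"));
    System.out.println("SWIFT".toLowerCase());                  // swıft
    System.out.println("SWIFT".toLowerCase().equals("swift"));  // false
    System.out.println("SWIFT".equalsIgnoreCase("swift"));      // true
  }
}
{code}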

> Filesystem.checkPath should use StringUtils.equalsIgnoreCase for comparisons
> 
>
> Key: HADOOP-13274
> URL: https://issues.apache.org/jira/browse/HADOOP-13274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> {{Filesystem.checkPath}} compares URI elements using 
> {{String.equalsIgnoreCase()}}, so is brittle against i18n locale changes.
> It should move to {{StringUtils.equalsIgnoreCase}} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13274) Filesystem.checkPath should use StringUtils.equalsIgnoreCase for comparisons

2016-06-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13274:
---

 Summary: Filesystem.checkPath should use 
StringUtils.equalsIgnoreCase for comparisons
 Key: HADOOP-13274
 URL: https://issues.apache.org/jira/browse/HADOOP-13274
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


{{Filesystem.checkPath}} compares URI elements using 
{{String.equalsIgnoreCase()}}, so is brittle against i18n locale changes.

It should move to {{StringUtils.equalsIgnoreCase}} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331555#comment-15331555
 ] 

Hadoop QA commented on HADOOP-13255:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-common-project: The patch generated 1 new + 234 
unchanged - 0 fixed = 235 total (was 234) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 53s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 45s{color} 
| {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.crypto.key.kms.server.TestKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810772/HADOOP-13255.test.patch
 |
| JIRA Issue | HADOOP-13255 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c07cb7298536 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 25064fb |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9782/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
 |
| unit | 
