[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941584#comment-15941584
 ] 

Hadoop QA commented on HADOOP-13363:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
24s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 24s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} root: The patch generated 0 new + 27 unchanged - 1 
fixed = 27 total (was 28) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860511/HADOOP-13363.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 30dd0a8e3236 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941569#comment-15941569
 ] 

Hadoop QA commented on HADOOP-14240:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 165 unchanged - 9 fixed = 165 total (was 174) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860510/HADOOP-14240.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5530ecec0be1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 84ddedc |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11930/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11930/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11930/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Configuration#get return value optimization
> ---
>
> Key: HADOOP-14240
> 

[jira] [Updated] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-03-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13363:

Attachment: HADOOP-13363.002.patch

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms (see, for 
> example, https://gist.github.com/BennettSmith/7111094 ). To avoid crazy 
> workarounds in the build environment, and given that 2.5.0 is slowly 
> disappearing as a standard installable package even for Linux/x86, we need 
> to either upgrade, bundle it ourselves, or find another option.






[jira] [Updated] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-03-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13363:

Affects Version/s: 3.0.0-alpha2

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms (see, for 
> example, https://gist.github.com/BennettSmith/7111094 ). To avoid crazy 
> workarounds in the build environment, and given that 2.5.0 is slowly 
> disappearing as a standard installable package even for Linux/x86, we need 
> to either upgrade, bundle it ourselves, or find another option.






[jira] [Commented] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941548#comment-15941548
 ] 

Jonathan Eagles commented on HADOOP-14240:
--

The majority of this optimization comes from constructing the return value 
directly, since that avoids both the construction cost of an ArrayList and 
the conversion from ArrayList to String[].
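
For illustration, here is a minimal sketch of the difference (hypothetical 
helper methods, not the actual HADOOP-14240 patch): splitting a 
comma-separated value through an intermediate ArrayList versus building the 
String[] directly.

{code}
// Sketch only: contrasts the intermediate-ArrayList style with direct
// String[] construction. Helper names are illustrative.
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
  // Allocates an ArrayList, then copies it into a fresh String[].
  static String[] splitViaList(String raw) {
    List<String> parts = new ArrayList<>();
    for (String s : raw.split(",")) {
      parts.add(s.trim());
    }
    return parts.toArray(new String[0]);    // second allocation + copy
  }

  // Counts separators first, then fills a right-sized String[] directly.
  static String[] splitDirect(String raw) {
    int n = 1;
    for (int i = 0; i < raw.length(); i++) {
      if (raw.charAt(i) == ',') {
        n++;
      }
    }
    String[] parts = new String[n];
    int idx = 0, start = 0;
    for (int i = 0; i <= raw.length(); i++) {
      if (i == raw.length() || raw.charAt(i) == ',') {
        parts[idx++] = raw.substring(start, i).trim();
        start = i + 1;
      }
    }
    return parts;                           // single array allocation
  }

  public static void main(String[] args) {
    for (String s : splitDirect("a, b ,c")) {
      System.out.println(s);
    }
  }
}
{code}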

> Configuration#get return value optimization
> ---
>
> Key: HADOOP-14240
> URL: https://issues.apache.org/jira/browse/HADOOP-14240
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14240.1.patch, HADOOP-14240.2.patch
>
>
> The string array return value can be more efficiently determined and some 
> general redundancies can be removed to improve the speed for 
> Configuration.get.






[jira] [Updated] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14240:
-
Attachment: HADOOP-14240.2.patch

> Configuration#get return value optimization
> ---
>
> Key: HADOOP-14240
> URL: https://issues.apache.org/jira/browse/HADOOP-14240
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14240.1.patch, HADOOP-14240.2.patch
>
>
> The string array return value can be more efficiently determined and some 
> general redundancies can be removed to improve the speed for 
> Configuration.get.






[jira] [Commented] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941536#comment-15941536
 ] 

Hadoop QA commented on HADOOP-14240:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 165 unchanged - 9 fixed = 166 total (was 174) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860504/HADOOP-14240.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 00091c0cf860 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 84ddedc |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11929/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11929/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11929/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11929/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Configuration#get return 

[jira] [Updated] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuyuki Tanimura updated HADOOP-14237:
---
Description: 
When I run a large Hadoop cluster on EC2 instances with an IAM Role, it fails 
to get the instance profile credentials, and eventually all jobs on the 
cluster fail. Since a number of S3A clients (all mappers and reducers) try to 
get the credentials, the AWS credential endpoint starts responding with 5xx 
and 4xx error codes.

SharedInstanceProfileCredentialsProvider.java partially addresses this, but it 
still does not share the credentials with other EC2 nodes / JVM processes.

This issue prevents users from creating Hadoop clusters on EC2.

  was:
When I run a large Hadoop cluster on EC2 instances with an IAM Role, it fails 
to get the instance profile credentials, and eventually all jobs on the 
cluster fail. Since a number of S3A clients (all mappers and reducers) try to 
get the credentials, the AWS credential endpoint starts responding with 5xx 
and 4xx error codes.

SharedInstanceProfileCredentialsProvider.java partially addresses this, but it 
still does not share the credentials with other EC2 nodes / processes.

This issue prevents users from creating Hadoop clusters on EC2.


> S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes
> ---
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 nodes / JVM processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Comment Edited] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941523#comment-15941523
 ] 

Kazuyuki Tanimura edited comment on HADOOP-14237 at 3/25/17 2:29 AM:
-

True.

Just to be clear, this patch is about making sure the credentials are shared 
among all Hadoop nodes, not only within a single node.
As I added more nodes to a cluster, it was too easy to hit the AWS 
account-level limits.


was (Author: kazuyukitanimura):
True.

Just to be clear, this patch is about making sure the credentials are shared 
among all Hadoop nodes, not only within a single node.
As I added more nodes to a cluster, it was too easy to hit the account-level 
limits.

> S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes
> ---
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 nodes / processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Commented] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941523#comment-15941523
 ] 

Kazuyuki Tanimura commented on HADOOP-14237:


True.

Just to be clear, this patch is about making sure the credentials are shared 
among all Hadoop nodes, not only within a single node.
As I added more nodes to a cluster, it was too easy to hit the account-level 
limits.

> S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes
> ---
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 nodes / processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Updated] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuyuki Tanimura updated HADOOP-14237:
---
Description: 
When I run a large Hadoop cluster on EC2 instances with an IAM Role, it fails 
to get the instance profile credentials, and eventually all jobs on the 
cluster fail. Since a number of S3A clients (all mappers and reducers) try to 
get the credentials, the AWS credential endpoint starts responding with 5xx 
and 4xx error codes.

SharedInstanceProfileCredentialsProvider.java partially addresses this, but it 
still does not share the credentials with other EC2 nodes / processes.

This issue prevents users from creating Hadoop clusters on EC2.

  was:
When I run a large Hadoop cluster on EC2 instances with an IAM Role, it fails 
to get the instance profile credentials, and eventually all jobs on the 
cluster fail. Since a number of S3A clients (all mappers and reducers) try to 
get the credentials, the AWS credential endpoint starts responding with 5xx 
and 4xx error codes.

SharedInstanceProfileCredentialsProvider.java partially addresses this, but it 
still does not share the credentials with other EC2 instances / processes.

This issue prevents users from creating Hadoop clusters on EC2.


> S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes
> ---
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 nodes / processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Updated] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuyuki Tanimura updated HADOOP-14237:
---
Summary: S3A Support Shared Instance Profile Credentials Across All Hadoop 
Nodes  (was: S3A Support Shared Instance Profile Credentials Across All 
Instances)

> S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes
> ---
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 instances / 
> processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Commented] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Instances

2017-03-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941514#comment-15941514
 ] 

Mingliang Liu commented on HADOOP-14237:


In the latest aws-java-sdk, the instance profile credentials provider is a 
singleton instance, so we can live without 
SharedInstanceProfileCredentialsProvider. 
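
For reference, a minimal sketch of leaning on the SDK's shared provider 
(assuming an aws-java-sdk 1.11.x release that exposes the singleton 
accessor):

{code}
// Sketch only: InstanceProfileCredentialsProvider.getInstance() returns a
// JVM-wide singleton that caches and refreshes instance-profile
// credentials, so all S3A clients in one process share a single
// metadata-endpoint fetcher instead of each hitting the endpoint.
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

public class SharedProviderSketch {
  public static void main(String[] args) {
    InstanceProfileCredentialsProvider provider =
        InstanceProfileCredentialsProvider.getInstance();
    AWSCredentials creds = provider.getCredentials();  // cached after first call
    System.out.println("access key id: " + creds.getAWSAccessKeyId());
  }
}
{code}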

> S3A Support Shared Instance Profile Credentials Across All Instances
> 
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 instances / 
> processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Commented] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941484#comment-15941484
 ] 

Jonathan Eagles commented on HADOOP-14240:
--

In my testing, Configuration#get is roughly 25%-30% faster with the patch 
applied.
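
A rough way to reproduce that kind of measurement (a naive timing loop, not 
a rigorous JMH benchmark; absolute numbers will vary with the JVM and the 
configuration contents):

{code}
// Sketch only: naive micro-benchmark of Configuration#get.
import org.apache.hadoop.conf.Configuration;

public class ConfGetBench {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);  // skip default resources
    conf.set("test.key", "one,two,three");
    // Warm-up so the JIT compiles the hot path before we time it.
    for (int i = 0; i < 100_000; i++) {
      conf.get("test.key");
    }
    long start = System.nanoTime();
    for (int i = 0; i < 1_000_000; i++) {
      conf.get("test.key");
    }
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("1M gets took " + elapsedMs + " ms");
  }
}
{code}

Running this once on trunk and once with the patch applied gives the 
comparison.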

> Configuration#get return value optimization
> ---
>
> Key: HADOOP-14240
> URL: https://issues.apache.org/jira/browse/HADOOP-14240
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14240.1.patch
>
>
> The string array return value can be more efficiently determined and some 
> general redundancies can be removed to improve the speed for 
> Configuration.get.






[jira] [Updated] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14240:
-
Status: Patch Available  (was: Open)

> Configuration#get return value optimization
> ---
>
> Key: HADOOP-14240
> URL: https://issues.apache.org/jira/browse/HADOOP-14240
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14240.1.patch
>
>
> The string array return value can be more efficiently determined and some 
> general redundancies can be removed to improve the speed for 
> Configuration.get.






[jira] [Updated] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14240:
-
Attachment: HADOOP-14240.1.patch

> Configuration#get return value optimization
> ---
>
> Key: HADOOP-14240
> URL: https://issues.apache.org/jira/browse/HADOOP-14240
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14240.1.patch
>
>
> The string array return value can be more efficiently determined and some 
> general redundancies can be removed to improve the speed for 
> Configuration.get.






[jira] [Created] (HADOOP-14240) Configuration#get return value optimization

2017-03-24 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-14240:


 Summary: Configuration#get return value optimization
 Key: HADOOP-14240
 URL: https://issues.apache.org/jira/browse/HADOOP-14240
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles


The string array return value can be more efficiently determined and some 
general redundancies can be removed to improve the speed for Configuration.get.






[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2017-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941473#comment-15941473
 ] 

Hudson commented on HADOOP-10101:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11461/])
HADOOP-10101. Update guava dependency to the latest version. (ozawa) (ozawa: 
rev 84ddedc0b2d58257d45c16ee5e83b15f94a7ba3a)
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ChildReaper.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MsInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsRegistry.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/MetricsCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/AbstractMetric.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsTag.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclTransformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/AbstractMetricsRecord.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsInfoImpl.java


> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.014.patch, 
> HADOOP-10101.015.patch, HADOOP-10101.016.patch, HADOOP-10101.017.patch, 
> HADOOP-10101.018.patch, HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This issue tries 
> to update guava to as recent a version as possible.






[jira] [Commented] (HADOOP-14239) S3A Retry Multiple S3 Key Deletion

2017-03-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941466#comment-15941466
 ] 

ASF GitHub Bot commented on HADOOP-14239:
-

GitHub user kazuyukitanimura opened a pull request:

https://github.com/apache/hadoop/pull/208

HADOOP-14239. S3A Retry Multiple S3 Key Deletion

Hi @steveloughran 

Sorry for sending many requests.

I explained the problem here 
https://issues.apache.org/jira/browse/HADOOP-14239

This pull request recursively retries deleting only the S3 keys that 
previously failed to delete during a multi-object deletion, because the 
aws-java-sdk's own retry does not help here. If the retries still fail, it 
falls back to single-key deletion.
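
A minimal sketch of that retry shape against the aws-java-sdk v1 API (the 
SDK types are real; the surrounding helper is hypothetical and is not the 
actual patch):

{code}
// Sketch only: retry a bulk S3 delete for just the keys that failed,
// then fall back to single-key deletes when retries are exhausted.
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;

public class RetryingBulkDelete {
  static void deleteKeys(AmazonS3 s3, String bucket,
                         List<KeyVersion> keys, int attemptsLeft) {
    try {
      s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(keys));
    } catch (MultiObjectDeleteException e) {
      // The exception reports exactly which keys were not deleted.
      List<KeyVersion> failed = new ArrayList<>();
      for (MultiObjectDeleteException.DeleteError err : e.getErrors()) {
        failed.add(new KeyVersion(err.getKey()));
      }
      if (attemptsLeft > 0) {
        deleteKeys(s3, bucket, failed, attemptsLeft - 1);  // failures only
      } else {
        for (KeyVersion k : failed) {        // last resort: one at a time
          s3.deleteObject(bucket, k.getKey());
        }
      }
    }
  }
}
{code}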

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomreach/hadoop HADOOP-14239

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/208.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #208


commit 707773b6e14b61b31ecd5473eaafa75dd5217707
Author: kazu 
Date:   2017-03-25T01:29:19Z

HADOOP-14239. S3A Retry Multiple S3 Key Deletion




> S3A Retry Multiple S3 Key Deletion
> --
>
> Key: HADOOP-14239
> URL: https://issues.apache.org/jira/browse/HADOOP-14239
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When fs.s3a.multiobjectdelete.enable == true, S3A tries to delete multiple 
> S3 keys at once.
> Although this is a great feature, it becomes problematic when AWS fails to 
> delete some of the S3 keys in the deletion list. The aws-java-sdk internally 
> retries the deletion, but that does not help because it simply retries the 
> same list of S3 keys, including the successfully deleted ones. In that case, 
> all subsequent retries fail on the previously deleted keys, since they no 
> longer exist. Eventually it throws an exception and the entire job fails.
> Luckily, the AWS API reports which keys it failed to delete. We should retry 
> only the keys that S3A failed to delete.






[jira] [Commented] (HADOOP-14239) S3A Retry Multiple S3 Key Deletion

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941457#comment-15941457
 ] 

Kazuyuki Tanimura commented on HADOOP-14239:


To be clear, deletion is called from rename() as well, which makes this issue 
even more frequent to encounter...

> S3A Retry Multiple S3 Key Deletion
> --
>
> Key: HADOOP-14239
> URL: https://issues.apache.org/jira/browse/HADOOP-14239
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When fs.s3a.multiobjectdelete.enable == true, S3A tries to delete multiple 
> S3 keys at once.
> Although this is a great feature, it becomes problematic when AWS fails to 
> delete some of the S3 keys in the deletion list. The aws-java-sdk internally 
> retries the deletion, but that does not help because it simply retries the 
> same list of S3 keys, including the successfully deleted ones. In that case, 
> all subsequent retries fail on the previously deleted keys, since they no 
> longer exist. Eventually it throws an exception and the entire job fails.
> Luckily, the AWS API reports which keys it failed to delete. We should retry 
> only the keys that S3A failed to delete.






[jira] [Created] (HADOOP-14239) S3A Retry Multiple S3 Key Deletion

2017-03-24 Thread Kazuyuki Tanimura (JIRA)
Kazuyuki Tanimura created HADOOP-14239:
--

 Summary: S3A Retry Multiple S3 Key Deletion
 Key: HADOOP-14239
 URL: https://issues.apache.org/jira/browse/HADOOP-14239
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.8.0, 2.8.1
 Environment: EC2, AWS
Reporter: Kazuyuki Tanimura


When fs.s3a.multiobjectdelete.enable == true, S3A tries to delete multiple S3 
keys at once.

Although this is a great feature, it becomes problematic when AWS fails to 
delete some of the S3 keys in the deletion list. The aws-java-sdk internally 
retries the deletion, but that does not help because it simply retries the 
same list of S3 keys, including the successfully deleted ones. In that case, 
all subsequent retries fail on the previously deleted keys, since they no 
longer exist. Eventually it throws an exception and the entire job fails.

Luckily, the AWS API reports which keys it failed to delete. We should retry 
only the keys that S3A failed to delete.






[jira] [Created] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API

2017-03-24 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-14238:
---

 Summary: [Umbrella] Rechecking Guava's object is not exposed to 
user-facing API
 Key: HADOOP-14238
 URL: https://issues.apache.org/jira/browse/HADOOP-14238
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


This is reported by [~hitesh] on HADOOP-10101.
At least, AMRMClient#waitFor takes a Guava Supplier instance as an argument.
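
For context, a sketch of the exposure (the waitFor signature is as described 
above; the calling code is illustrative only):

{code}
// Sketch only: because AMRMClient#waitFor accepts a
// com.google.common.base.Supplier<Boolean>, callers must compile against
// Guava, so bumping Hadoop's Guava version can break user code. An overload
// taking java.util.function.Supplier would keep Guava out of the API.
import com.google.common.base.Supplier;

public class WaitForSketch {
  public static void main(String[] args) {
    Supplier<Boolean> check = new Supplier<Boolean>() {  // Guava type leaks here
      @Override
      public Boolean get() {
        return true;  // e.g. "has the application finished?"
      }
    };
    // amrmClient.waitFor(check);  // hypothetical call site; client not set up
    System.out.println(check.get());
  }
}
{code}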






[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version

2017-03-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-10101:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks to Nicholas and Steve for the review, and 
thanks to everyone who joined this issue for their comments.

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.014.patch, 
> HADOOP-10101.015.patch, HADOOP-10101.016.patch, HADOOP-10101.017.patch, 
> HADOOP-10101.018.patch, HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This issue tries 
> to update guava to as recent a version as possible.






[jira] [Commented] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Instances

2017-03-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941404#comment-15941404
 ] 

ASF GitHub Bot commented on HADOOP-14237:
-

GitHub user kazuyukitanimura opened a pull request:

https://github.com/apache/hadoop/pull/207

HADOOP-14237. S3A Support Shared Instance Profile Credentials Across All 
Instances

Hi @steveloughran 

Yet another patch that I made a few months back.
I explained the issue at https://issues.apache.org/jira/browse/HADOOP-14237

This pull request aims to open the discussion rather than to be a 
complete solution. It would be great if you could offer your thoughts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomreach/hadoop HADOOP-14237

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #207


commit e3cfeaeddf99aba69e64f502b3d1332dc09b10f4
Author: kazu 
Date:   2017-03-25T00:37:03Z

HADOOP-14237. S3A Support Shared Instance Profile Credentials Across All 
Instances




> S3A Support Shared Instance Profile Credentials Across All Instances
> 
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it 
> fails to get the instance profile credentials, and eventually all jobs on 
> the cluster fail. Since a number of S3A clients (all mappers and reducers) 
> try to get the credentials, the AWS credential endpoint starts responding 
> with 5xx and 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java partially addresses this, but 
> it still does not share the credentials with other EC2 instances / 
> processes.
> This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Commented] (HADOOP-14236) S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries in metadata store

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941396#comment-15941396
 ] 

Hadoop QA commented on HADOOP-14236:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
48s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 5 unchanged - 2 fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14236 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860491/HADOOP-14236-HADOOP-13345.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 61d4fa734346 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / ed15aba |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11928/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11928/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11928/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries 
> in metadata store
> ---
>
> Key: HADOOP-14236
> URL: https://issues.apache.org/jira/browse/HADOOP-14236
> Project: Hadoop Common
>

[jira] [Created] (HADOOP-14237) S3A Support Shared Instance Profile Credentials Across All Instances

2017-03-24 Thread Kazuyuki Tanimura (JIRA)
Kazuyuki Tanimura created HADOOP-14237:
--

 Summary: S3A Support Shared Instance Profile Credentials Across 
All Instances
 Key: HADOOP-14237
 URL: https://issues.apache.org/jira/browse/HADOOP-14237
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.8.0, 2.8.1
 Environment: EC2, AWS
Reporter: Kazuyuki Tanimura


When I run a large Hadoop cluster on EC2 instances with an IAM Role, it fails 
to get the instance profile credentials, and eventually all jobs on the 
cluster fail. Since a number of S3A clients (all mappers and reducers) try to 
get the credentials, the AWS credential endpoint starts responding with 5xx 
and 4xx error codes.

SharedInstanceProfileCredentialsProvider.java partially addresses this, but it 
still does not share the credentials with other EC2 instances / processes.

This issue prevents users from creating Hadoop clusters on EC2.






[jira] [Updated] (HADOOP-14236) S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries in metadata store

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14236:
---
Status: Patch Available  (was: Open)

> S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries 
> in metadata store
> ---
>
> Key: HADOOP-14236
> URL: https://issues.apache.org/jira/browse/HADOOP-14236
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-14236-HADOOP-13345.000.patch
>
>
> After running integration test {{ITestS3AFileSystemContract}}, I found the 
> following items are not cleaned up in DynamoDB:
> {code}
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
>  child=subdir
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
>  child=file2
> {code}
> At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and 
> that we simply needed to be careful when cleaning up test data.
> Then I found it’s a bug(?) in the code integrating S3Guard with 
> S3AFileSystem: on rename we miss the sub-directory items to put (dest) and 
> delete (src). The reason is that S3A deletes those fake directory objects 
> when they are not necessary, e.g. when the directory is non-empty. So when 
> we list the objects to rename, the object summaries will only return _file_ 
> objects. This has two consequences after a rename:
> # left-over items remain for the src path in the metadata store - these 
> will confuse {{get(Path)}}, which should return null
> # the whole subtree for the dest path is not persisted to the metadata 
> store - this breaks the DynamoDBMetadataStore invariant: _if a path exists, 
> all its ancestors will also exist in the table_.
> Existing tests are not complaining about this though. If this is a real bug, 
> let’s address it here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14236) S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries in metadata store

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14236:
---
Attachment: HADOOP-14236-HADOOP-13345.000.patch

The V0 patch presents what I'm thinking. Pinging [~ste...@apache.org] and 
[~fabbri] for discussion.
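
As a hedged sketch of the direction (the s3guard type names are from the 
HADOOP-13345 branch; the helper itself and its exact signatures are 
illustrative, not the attached patch): walk the metadata store's own subtree 
so entries with no backing fake directory object still get moved.

{code}
import java.io.IOException;
import java.util.Collection;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

final class RenameSubtreeSketch {
  private RenameSubtreeSketch() {
  }

  /** Collect every descendant of src recorded in the metadata store. */
  static void collectSubtree(MetadataStore ms, Path src,
      Collection<Path> srcPaths) throws IOException {
    DirListingMetadata children = ms.listChildren(src);
    if (children == null) {
      return;
    }
    for (PathMetadata child : children.getListing()) {
      Path childPath = child.getFileStatus().getPath();
      srcPaths.add(childPath);  // delete on the src side of the move
      if (child.getFileStatus().isDirectory()) {
        collectSubtree(ms, childPath, srcPaths);  // recurse into directories
      }
    }
    // The rename would then mirror each collected src path under dest and
    // hand both sets to the store's move operation in one batch.
  }
}
{code}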

> S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries 
> in metadata store
> ---
>
> Key: HADOOP-14236
> URL: https://issues.apache.org/jira/browse/HADOOP-14236
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-14236-HADOOP-13345.000.patch
>
>
> After running integration test {{ITestS3AFileSystemContract}}, I found the 
> following items are not cleaned up in DynamoDB:
> {code}
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
>  child=subdir
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
>  child=file2
> {code}
> At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and 
> that we simply needed to be careful when cleaning up test data.
> Then I found it’s a bug(?) in the code integrating S3Guard with 
> S3AFileSystem: on rename we miss the sub-directory items to put (dest) and 
> delete (src). The reason is that S3A deletes those fake directory objects 
> when they are not necessary, e.g. when the directory is non-empty. So when 
> we list the objects to rename, the object summaries will only return _file_ 
> objects. This has two consequences after a rename:
> # left-over items remain for the src path in the metadata store - these 
> will confuse {{get(Path)}}, which should return null
> # the whole subtree for the dest path is not persisted to the metadata 
> store - this breaks the DynamoDBMetadataStore invariant: _if a path exists, 
> all its ancestors will also exist in the table_.
> Existing tests are not complaining about this though. If this is a real bug, 
> let’s address it here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2017-03-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941363#comment-15941363
 ] 

Tsuyoshi Ozawa commented on HADOOP-10101:
-

Thanks Nicholas for taking a look. Checking this in within 2 hours.

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.014.patch, 
> HADOOP-10101.015.patch, HADOOP-10101.016.patch, HADOOP-10101.017.patch, 
> HADOOP-10101.018.patch, HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This issue tries to 
> update the version to the latest version possible. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14236) S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries in metadata store

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14236:
---
Description: 
After running integration test {{ITestS3AFileSystemContract}}, I found the 
following items are not cleaned up in DynamoDB:
{code}
parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
 child=subdir
parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
 child=file2
{code}
At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and that 
we simply needed to be careful when cleaning up test data.

Then I found it’s a bug(?) in the code integrating S3Guard with S3AFileSystem: 
on rename we miss the sub-directory items to put (dest) and delete (src). The 
reason is that S3A deletes those fake directory objects when they are not 
necessary, e.g. when the directory is non-empty. So when we list the objects to 
rename, the object summaries will only return _file_ objects. This has two 
consequences after a rename:
# left-over items remain for the src path in the metadata store - these will 
confuse {{get(Path)}}, which should return null
# the whole subtree for the dest path is not persisted to the metadata store - 
this breaks the DynamoDBMetadataStore invariant: _if a path exists, all its 
ancestors will also exist in the table_.

Existing tests are not complaining about this though. If this is a real bug, 
let’s address it here.

  was:
After running integration test {{ITestS3AFileSystemContract}}, I found the 
following items are not cleaned up in DynamoDB:
{code}
parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
 child=subdir
parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
 child=file2
{code}
At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and that 
we simply needed to be careful when cleaning up test data.

Then I found it’s a bug in the code integrating S3Guard with S3AFileSystem: 
on rename we miss the sub-directory items to put (dest) and delete (src). The 
reason is that S3A deletes those fake directory objects when they are not 
necessary, e.g. when the directory is non-empty. So when we list the objects to 
rename, the object summaries will only return _file_ objects. This has two 
consequences after a rename:
# left-over items remain for the src path in the metadata store - these will 
confuse {{get(Path)}}, which should return null
# the whole subtree for the dest path is not persisted to the metadata store - 
this breaks the DynamoDBMetadataStore invariant: _if a path exists, all its 
ancestors will also exist in the table_.

Existing tests are not complaining about this though. If this is a real bug, 
let’s address it here.


> S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries 
> in metadata store
> ---
>
> Key: HADOOP-14236
> URL: https://issues.apache.org/jira/browse/HADOOP-14236
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
>
> After running integration test {{ITestS3AFileSystemContract}}, I found the 
> following items are not cleaned up in DynamoDB:
> {code}
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
>  child=subdir
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
>  child=file2
> {code}
> At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and 
> that we simply needed to be careful when cleaning up test data.
> Then I found it’s a bug(?) in the code integrating S3Guard with 
> S3AFileSystem: on rename we miss the sub-directory items to put (dest) and 
> delete (src). The reason is that S3A deletes those fake directory objects 
> when they are not necessary, e.g. when the directory is non-empty. So when 
> we list the objects to rename, the object summaries will only return _file_ 
> objects. This has two consequences after a rename:
> # left-over items remain for the src path in the metadata store - these 
> will confuse {{get(Path)}}, which should return null
> # the whole subtree for the dest path is not persisted to the metadata 
> store - this breaks the DynamoDBMetadataStore invariant: _if a path exists, 
> all its ancestors will also exist in the table_.
> Existing tests are not complaining about this though. If this is a real bug, 
> let’s address it here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14236) S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries in metadata store

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14236:
---
Priority: Critical  (was: Major)

> S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries 
> in metadata store
> ---
>
> Key: HADOOP-14236
> URL: https://issues.apache.org/jira/browse/HADOOP-14236
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
>
> After running integration test {{ITestS3AFileSystemContract}}, I found the 
> following items are not cleaned up in DynamoDB:
> {code}
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
>  child=subdir
> parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
>  child=file2
> {code}
> At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and 
> that we simply needed to be careful when cleaning up test data.
> Then I found it’s a bug in the code integrating S3Guard with 
> S3AFileSystem: on rename we miss the sub-directory items to put (dest) and 
> delete (src). The reason is that S3A deletes those fake directory objects 
> when they are not necessary, e.g. when the directory is non-empty. So when 
> we list the objects to rename, the object summaries will only return _file_ 
> objects. This has two consequences after a rename:
> # left-over items remain for the src path in the metadata store - these 
> will confuse {{get(Path)}}, which should return null
> # the whole subtree for the dest path is not persisted to the metadata 
> store - this breaks the DynamoDBMetadataStore invariant: _if a path exists, 
> all its ancestors will also exist in the table_.
> Existing tests are not complaining about this though. If this is a real bug, 
> let’s address it here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14236) S3Guard: S3AFileSystem::rename() should move non-listed sub-directory entries in metadata store

2017-03-24 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14236:
--

 Summary: S3Guard: S3AFileSystem::rename() should move non-listed 
sub-directory entries in metadata store
 Key: HADOOP-14236
 URL: https://issues.apache.org/jira/browse/HADOOP-14236
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Mingliang Liu
Assignee: Mingliang Liu


After running integration test {{ITestS3AFileSystemContract}}, I found the 
following items are not cleaned up in DynamoDB:
{code}
parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExisting/dir,
 child=subdir
parent=/mliu-s3guard/user/mliu/s3afilesystemcontract/testRenameDirectoryAsExistingNew/newdir/subdir,
 child=file2
{code}
At first I thought it was similar to [HADOOP-14226] or [HADOOP-14227], and that 
we simply needed to be careful when cleaning up test data.

Then I found it’s a bug in the code integrating S3Guard with S3AFileSystem: 
on rename we miss the sub-directory items to put (dest) and delete (src). The 
reason is that S3A deletes those fake directory objects when they are not 
necessary, e.g. when the directory is non-empty. So when we list the objects to 
rename, the object summaries will only return _file_ objects. This has two 
consequences after a rename:
# left-over items remain for the src path in the metadata store - these will 
confuse {{get(Path)}}, which should return null
# the whole subtree for the dest path is not persisted to the metadata store - 
this breaks the DynamoDBMetadataStore invariant: _if a path exists, all its 
ancestors will also exist in the table_.

Existing tests are not complaining about this though. If this is a real bug, 
let’s address it here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14235) S3A Path does not understand colon (:) when globbing

2017-03-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941346#comment-15941346
 ] 

ASF GitHub Bot commented on HADOOP-14235:
-

GitHub user kazuyukitanimura opened a pull request:

https://github.com/apache/hadoop/pull/206

HADOOP-14235. S3A Path does not understand colon (:) when globbing

Hi @steveloughran (not sure who else I need to involve here)

I explained the issue at https://issues.apache.org/jira/browse/HADOOP-14235

This pull request fixes the issue and does not break other things as far as 
I know. (I also ran the unit tests).

Probably, #204 should also fix this issue. This pull request is for a 
short-term solution in case anyone is interested.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomreach/hadoop HADOOP-14235

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/206.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #206


commit ff37d08ed314d7b2e9b7d8aff648e38e21fceacb
Author: kazu 
Date:   2017-03-24T23:36:01Z

HADOOP-14235. S3A Path does not understand colon (:) when globbing




> S3A Path does not understand colon (:) when globbing
> 
>
> Key: HADOOP-14235
> URL: https://issues.apache.org/jira/browse/HADOOP-14235
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> In S3 paths, colons ":" are valid characters. However, the Java URI 
> class, which is used in the Path class, does not allow them.
> This becomes a problem particularly when we are globbing S3 paths: the 
> globber thinks paths with colons are invalid paths and throws 
> URISyntaxException.
> The reason is that we share Globber.java with all other Fs, and some of the 
> rules for regular Fs, such as this colon restriction, are not applicable to 
> S3.
> The same issue is reported here: https://issues.apache.org/jira/browse/SPARK-20061
> The good news is I have a one-line fix that I am about to send as a pull request.
> However, for a proper fix, we should separate the S3 globber from 
> Globber.java as proposed at https://issues.apache.org/jira/browse/HADOOP-13371



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14235) S3A Path does not understand colon (:) when globbing

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuyuki Tanimura updated HADOOP-14235:
---
Environment: EC2, AWS

> S3A Path does not understand colon (:) when globbing
> 
>
> Key: HADOOP-14235
> URL: https://issues.apache.org/jira/browse/HADOOP-14235
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
>Reporter: Kazuyuki Tanimura
>
> In S3 paths, colons ":" are valid characters. However, the Java URI 
> class, which is used in the Path class, does not allow them.
> This becomes a problem particularly when we are globbing S3 paths: the 
> globber thinks paths with colons are invalid paths and throws 
> URISyntaxException.
> The reason is that we share Globber.java with all other Fs, and some of the 
> rules for regular Fs, such as this colon restriction, are not applicable to 
> S3.
> The same issue is reported here: https://issues.apache.org/jira/browse/SPARK-20061
> The good news is I have a one-line fix that I am about to send as a pull request.
> However, for a proper fix, we should separate the S3 globber from 
> Globber.java as proposed at https://issues.apache.org/jira/browse/HADOOP-13371



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14235) S3A Path does not understand colon (:) when globbing

2017-03-24 Thread Kazuyuki Tanimura (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuyuki Tanimura updated HADOOP-14235:
---
Description: 
In S3 paths, colons ":" are valid characters. However, the Java URI class, 
which is used in the Path class, does not allow them.

This becomes a problem particularly when we are globbing S3 paths: the globber 
thinks paths with colons are invalid paths and throws URISyntaxException.

The reason is that we share Globber.java with all other Fs, and some of the 
rules for regular Fs, such as this colon restriction, are not applicable to S3.

The same issue is reported here: https://issues.apache.org/jira/browse/SPARK-20061

The good news is I have a one-line fix that I am about to send as a pull request.

However, for a proper fix, we should separate the S3 globber from 
Globber.java as proposed at https://issues.apache.org/jira/browse/HADOOP-13371

  was:
In S3 paths, colons (:) are valid characters. However, the Java URI class, 
which is used in the Path class, does not allow them.

This becomes a problem particularly when we are globbing S3 paths: the globber 
thinks paths with colons are invalid paths and throws URISyntaxException.

The reason is that we share Globber.java with all other Fs, and some of the 
rules for regular Fs, such as this colon restriction, are not applicable to S3.

The same issue is reported here: https://issues.apache.org/jira/browse/SPARK-20061

The good news is I have a one-line fix that I am about to send as a pull request.

However, for a proper fix, we should separate the S3 globber from 
Globber.java as proposed at https://issues.apache.org/jira/browse/HADOOP-13371


> S3A Path does not understand colon (:) when globbing
> 
>
> Key: HADOOP-14235
> URL: https://issues.apache.org/jira/browse/HADOOP-14235
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
>Reporter: Kazuyuki Tanimura
>
> In S3 paths, colons ":" are valid characters. However, the Java URI 
> class, which is used in the Path class, does not allow them.
> This becomes a problem particularly when we are globbing S3 paths: the 
> globber thinks paths with colons are invalid paths and throws 
> URISyntaxException.
> The reason is that we share Globber.java with all other Fs, and some of the 
> rules for regular Fs, such as this colon restriction, are not applicable to 
> S3.
> The same issue is reported here: https://issues.apache.org/jira/browse/SPARK-20061
> The good news is I have a one-line fix that I am about to send as a pull request.
> However, for a proper fix, we should separate the S3 globber from 
> Globber.java as proposed at https://issues.apache.org/jira/browse/HADOOP-13371



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14235) S3A Path does not understand colon (:) when globbing

2017-03-24 Thread Kazuyuki Tanimura (JIRA)
Kazuyuki Tanimura created HADOOP-14235:
--

 Summary: S3A Path does not understand colon (:) when globbing
 Key: HADOOP-14235
 URL: https://issues.apache.org/jira/browse/HADOOP-14235
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.8.0, 2.8.1
Reporter: Kazuyuki Tanimura


In S3 paths, colons (:) are valid characters. However, the Java URI class, 
which is used in the Path class, does not allow them.

This becomes a problem particularly when we are globbing S3 paths: the globber 
thinks paths with colons are invalid paths and throws URISyntaxException.

The reason is that we share Globber.java with all other Fs, and some of the 
rules for regular Fs, such as this colon restriction, are not applicable to S3.

The same issue is reported here: https://issues.apache.org/jira/browse/SPARK-20061

The good news is I have a one-line fix that I am about to send as a pull request.

However, for a proper fix, we should separate the S3 globber from 
Globber.java as proposed at https://issues.apache.org/jira/browse/HADOOP-13371
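
A small standalone repro sketch of the parse failure (the bucket, directory 
and file names are made up):

{code}
import org.apache.hadoop.fs.Path;

public class ColonPathDemo {
  public static void main(String[] args) {
    // The colon in the child segment is mis-read as a URI scheme separator.
    // Throws IllegalArgumentException caused by
    //   java.net.URISyntaxException: Relative path in absolute URI: file:2017
    new Path("s3a://bucket/dir", "file:2017");
  }
}
{code}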



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14213) Move Configuration runtime check for hadoop-site.xml to initialization

2017-03-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14213:
-
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0-alpha3

> Move Configuration runtime check for hadoop-site.xml to initialization
> --
>
> Key: HADOOP-14213
> URL: https://issues.apache.org/jira/browse/HADOOP-14213
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14213.1.patch, HADOOP-14213.2.patch
>
>
> Each Configuration object that loads defaults checks for hadoop-site.xml. It 
> has been long deprecated and is not present in most, if not all, 
> installations. The getResource check for hadoop-site.xml has to scan the 
> entire classpath precisely because the file is not found. This jira proposes 
> to 1) either remove hadoop-site.xml as a default resource or 2) move the 
> check to static initialization of the class so the performance hit is only 
> taken once.
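
A hedged sketch of option 2 (the names are illustrative only, not the 
attached patch):

{code}
public class ConfigurationSketch {
  // Probe the classpath for hadoop-site.xml once, in a static initializer,
  // so only the first Configuration pays the classpath-scan cost.
  private static final boolean HADOOP_SITE_ON_CLASSPATH;
  static {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    if (cl == null) {
      cl = ConfigurationSketch.class.getClassLoader();
    }
    HADOOP_SITE_ON_CLASSPATH = cl.getResource("hadoop-site.xml") != null;
  }

  boolean shouldLoadHadoopSite() {
    return HADOOP_SITE_ON_CLASSPATH;  // instances reuse the one-time probe
  }
}
{code}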



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-03-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13817:
-
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0-alpha3

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it's set to 0, which 
> implies infinite).
> If such a command hangs for a long time on the OS end, due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry 
> (only to likely run into such hangs again with every attempt until at least 
> one command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.
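
As a hedged sketch, ShellCommandExecutor already accepts a timeout in one of 
its constructors, so the change could be as small as passing a finite value 
(the command, user name and 5000ms figure below are illustrative):

{code}
import java.io.File;
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.util.Shell.ShellCommandExecutor;

public class GroupLookupWithTimeout {
  public static void main(String[] args) throws IOException {
    ShellCommandExecutor exec = new ShellCommandExecutor(
        new String[] {"id", "-gn", "someuser"},  // primary-group lookup
        (File) null, (Map<String, String>) null,
        5000L);                                  // finite timeout, not 0
    exec.execute();
    System.out.println(exec.getOutput());
  }
}
{code}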



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-03-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13075:
-
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0-alpha3

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
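
To illustrate the shape of the "additional fs.s3a configuration properties" 
mentioned above, a hedged example (the property names are the ones later 
releases settled on and the key ARN is a placeholder; treat both as 
assumptions, not this patch's final naming):

{code}
import org.apache.hadoop.conf.Configuration;

public class SseKmsConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Select the SSE variant, then the KMS key to encrypt with.
    conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
    conf.set("fs.s3a.server-side-encryption.key",
        "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID");
    System.out.println(conf.get("fs.s3a.server-side-encryption-algorithm"));
  }
}
{code}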



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-14211:
---
Fix Version/s: 2.8.1
   2.7.4

Thanks [~xkrogen] for the work and [~andrew.wang] for the review. +1 on the 
patch as well. I just cherry-picked to branch-2.8 and branch-2.7.

> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS is an example of 
> this. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.
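
For illustration, a hedged sketch of the change the description implies (a 
fragment mirroring the quoted super call, not necessarily the committed 
patch):

{code}
// fs has already validated its own URI, so the wrapper can skip
// re-deriving authorityNeeded from the URI and simply pass false.
super(fs.getUri(), fs.getUri().getScheme(),
    false /* authorityNeeded: already validated by fs itself */,
    fs.getUriDefaultPort());
{code}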



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14175) NPE when ADL store URI contains underscore

2017-03-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou reassigned HADOOP-14175:
--

Assignee: Xiaobing Zhou

> NPE when ADL store URI contains underscore
> --
>
> Key: HADOOP-14175
> URL: https://issues.apache.org/jira/browse/HADOOP-14175
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Xiaobing Zhou
>Priority: Minor
>  Labels: newbie, supportability
>
> Please note the underscore {{_}} in the store name {{jzhuge_adls}}. The same 
> NPE occurs wherever the underscore appears in the URI.
> {noformat}
> $ bin/hadoop fs -ls adl://jzhuge_adls.azuredatalakestore.net/
> -ls: Fatal internal error
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.adl.AdlFileSystem.initialize(AdlFileSystem.java:145)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3257)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3306)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3274)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> {noformat}
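
The likely root cause can be shown standalone (hedged; this is plain 
java.net.URI behaviour, independent of ADL):

{code}
import java.net.URI;

public class UnderscoreHostDemo {
  public static void main(String[] args) {
    // '_' is not legal in a URI hostname, so getHost() returns null even
    // though getAuthority() is non-null; code that dereferences the host
    // then throws NullPointerException.
    URI uri = URI.create("adl://jzhuge_adls.azuredatalakestore.net/");
    System.out.println(uri.getAuthority()); // jzhuge_adls.azuredatalakestore.net
    System.out.println(uri.getHost());      // null
  }
}
{code}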



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941133#comment-15941133
 ] 

Hudson commented on HADOOP-14230:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11458 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11458/])
HADOOP-14230. TestAdlFileSystemContractLive fails to clean up. (jzhuge: rev 
d1b7439b48caa18d64a94be1ad5e4927ce573ab8)
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java


> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941132#comment-15941132
 ] 

Hadoop QA commented on HADOOP-13786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 36 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
58s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 53s{color} 
| {color:red} root generated 14 new + 761 unchanged - 2 fixed = 775 total (was 
763) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 20s{color} | {color:orange} root: The patch generated 129 new + 98 unchanged 
- 14 fixed = 227 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 46s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 44s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
|   | hadoop.fs.s3a.commit.staging.TestStagingMRJob |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13786 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860426/HADOOP-13786-HADOOP-13345-020.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 747de026f246 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14230:

Description: 
TestAdlFileSystemContractLive fails to clean up test directories after the 
tests.

This is the leftover after {{testListStatus}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/a
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/b
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
{noformat}

This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
-rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
{noformat}


  was:
TestAdlFileSystemContractLive fails to clean up test directories after the 
tests.

This is the leftover after {{testListStatus}}:
{nonformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/a
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/b
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
{noformat}

This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
-rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
{noformat}



> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp 

[jira] [Updated] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14230:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8.

Thanks [~liuml07] for the review. I filed HADOOP-14234 for ADLS enhancements to 
make after HADOOP-14180 is complete.

> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14234) Improve ADLS FileSystem tests with JUnit4

2017-03-24 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14234:
---

 Summary: Improve ADLS FileSystem tests with JUnit4
 Key: HADOOP-14234
 URL: https://issues.apache.org/jira/browse/HADOOP-14234
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl, test
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


HADOOP-14180 switches FileSystem contract tests to JUnit4 and makes various 
enhancements. Improve ADLS FileSystem contract tests based on that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14217) Object Storage: support colon in object path

2017-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941070#comment-15941070
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-14217:
--

Being unable to support ':' is a long-standing issue, going back as early as 
HDFS-13.  I attached a patch 
([2066_20071022.patch|https://issues.apache.org/jira/secure/attachment/12368184/2066_20071022.patch])
 there (wow) almost 10 years ago.

One problem is that the general URI syntax is very permissive.  We could safely 
assume that our URIs are [hierarchical 
URIs|http://docs.oracle.com/javase/8/docs/api/java/net/URI.html], i.e. 
{code}
[scheme:][//authority][path][?query][#fragment] 
{code}
or even
{code}
[[scheme:]//authority]path
{code}
Then, the problem becomes fixable as shown in [this 
comment|https://issues.apache.org/jira/browse/HDFS-13?focusedCommentId=12536875=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12536875].
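
To illustrate (a hedged, standalone example): built component-by-component in 
hierarchical form, a colon inside a path segment parses fine; only the 
single-string parse mis-reads the text before the colon as a scheme.

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class HierarchicalUriDemo {
  public static void main(String[] args) throws URISyntaxException {
    // Hierarchical construction: the colon in the path segment is legal.
    URI ok = new URI("s3a", "bucket", "/dir/file:2017", null, null);
    System.out.println(ok);                  // s3a://bucket/dir/file:2017
    // Single-string parse: "file" is taken as the scheme instead.
    URI opaque = new URI("file:2017");
    System.out.println(opaque.isOpaque());   // true
  }
}
{code}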
  

> Object Storage: support colon in object path
> 
>
> Key: HADOOP-14217
> URL: https://issues.apache.org/jira/browse/HADOOP-14217
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Reporter: Genmao Yu
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940985#comment-15940985
 ] 

Hudson commented on HADOOP-13715:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11456 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11456/])
HADOOP-13715. Add isErasureCoded() API to FileStatus class. Contributed (wang: 
rev 52b00600df921763725396ed92194d3338167655)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewfsFileStatus.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AGetFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/FsPermissionExtension.java
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java


> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch, 
> HADOOP-13715.03.patch, HADOOP-13715.04.patch, HADOOP-13715.05.patch, 
> HADOOP-13715.06.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell whether they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.
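
A hedged usage sketch of the proposed API (the path is a placeholder):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcCheckSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path("/some/file"));
    if (st.isErasureCoded()) {
      // e.g. distcp could skip preserving the replication factor here.
      System.out.println("erasure coded file");
    }
  }
}
{code}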



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Patch Available  (was: Open)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. output is not visible until the close()), if we need to use 
> that to allow us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Attachment: HADOOP-13786-HADOOP-13345-020.patch

patch #20: test changes only; nothing in the production code.

* lambdas -> anonymous classes so the code will backport to branch-2
* fault injection class added so that any of the IT test operations can be 
made to fail, to see what happens in the protocols.
* IT test of an MR job, {{AbstractITCommitMRJob}}, based on the mock one: it 
shares the core mini cluster setup, with the test case modified to look for 
the final data. Currently just staging; possible for the other committers 
too, though they will need separate assertions. Maybe this can just be 
parameterized.
* {{StorageStatisticsTracker}} to snapshot/diff storage stats (see the sketch 
below). I'd hoped to use this for tracking IO in the MR job, but as MR 
doesn't collect storage stats, it isn't used yet. It should be useful in 
other tests though.
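Not the tracker from the patch; a minimal sketch of the snapshot/diff idea, 
assuming the stock {{FileSystem#getStorageStatistics()}} API:

{code}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.StorageStatistics;
import org.apache.hadoop.fs.StorageStatistics.LongStatistic;

/** Snapshots a filesystem's storage statistics, then diffs them later. */
class StatsSnapshot {
  private final Map<String, Long> snapshot = new HashMap<>();

  /** Capture the current counter values. */
  StatsSnapshot(FileSystem fs) {
    Iterator<LongStatistic> it =
        fs.getStorageStatistics().getLongStatistics();
    while (it.hasNext()) {
      LongStatistic s = it.next();
      snapshot.put(s.getName(), s.getValue());
    }
  }

  /** Return how much each counter has grown since the snapshot. */
  Map<String, Long> diff(FileSystem fs) {
    Map<String, Long> delta = new HashMap<>();
    Iterator<LongStatistic> it =
        fs.getStorageStatistics().getLongStatistics();
    while (it.hasNext()) {
      LongStatistic s = it.next();
      long before = snapshot.getOrDefault(s.getName(), 0L);
      delta.put(s.getName(), s.getValue() - before);
    }
    return delta;
  }
}
{code}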
   
The new IT MR test works once I've enabled the -unique-filenames option in the 
test. I've also turned on the same switch for the protocol tests, but that 
doesn't work for the mapper tests:

{code}
java.io.FileNotFoundException: index file in 
s3a://hwdev-steve-ireland-new/test/ITestDirectoryCommitProtocol-testMapFileOutputCommitter/part-m-0:
 not found 
s3a://hwdev-steve-ireland-new/test/ITestDirectoryCommitProtocol-testMapFileOutputCommitter/part-m-0/index
 in 
s3a://hwdev-steve-ireland-new/test/ITestDirectoryCommitProtocol-testMapFileOutputCommitter/part-m-0

at 
org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:779)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:757)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
at 
org.apache.hadoop.fs.s3a.commit.AbstractITCommitProtocol.validateMapFileOutputContent(AbstractITCommitProtocol.java:607)
at 
org.apache.hadoop.fs.s3a.commit.AbstractITCommitProtocol.testMapFileOutputCommitter(AbstractITCommitProtocol.java:947)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a://hwdev-steve-ireland-new/test/ITestDirectoryCommitProtocol-testMapFileOutputCommitter/part-m-0/index
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:1906)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:1802)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1764)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:773)
{code}



> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, 

[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Open  (was: Patch Available)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. output is not visible until the close()), if we need to use 
> that to allow us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940953#comment-15940953
 ] 

Hanisha Koneru commented on HADOOP-14233:
-

Thank you [~jeagles]. The patch LGTM. 

It would be good to follow this practice for logging as well. Passing 
concatenated strings into a logging method can also incur a needless 
performance hit, because the concatenation is performed every time the 
method is called, whether or not the log level is set low enough to show the 
message.
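A minimal illustration of the same point with SLF4J-style parameterized 
logging; the logger and values are made up for the example:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingExample.class);

  void process(String key, long value) {
    // Bad: the string is concatenated on every call, even with DEBUG off.
    LOG.debug("Processed key " + key + " with value " + value);

    // Good: the message is only formatted if DEBUG is actually enabled.
    LOG.debug("Processed key {} with value {}", key, value);
  }
}
{code}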

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14233.1.patch
>
>
> The String in the precondition check is constructed prior to failure 
> detection. Since the normal case is no error, we can gain performance by 
> delaying the construction of the string until the failure is detected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13715:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch, 
> HADOOP-13715.03.patch, HADOOP-13715.04.patch, HADOOP-13715.05.patch, 
> HADOOP-13715.06.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as Flume or HBase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940915#comment-15940915
 ] 

Andrew Wang commented on HADOOP-13715:
--

LGTM +1, committed to trunk, thanks for the great contribution Manoj!

Do you mind checking if we have JIRAs filed for these flaky tests? Precommit 
has been really suffering lately. It might be interesting to dive into the EC 
related one.

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch, 
> HADOOP-13715.03.patch, HADOOP-13715.04.patch, HADOOP-13715.05.patch, 
> HADOOP-13715.06.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as Flume or HBase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940906#comment-15940906
 ] 

Mingliang Liu commented on HADOOP-14230:


+1

For the {{if (AdlStorageConfiguration.isContractTestEnabled())}} guard in 
setUp/tearDown, ideally we could use 
{{assume(AdlStorageConfiguration.isContractTestEnabled());}}. However, this 
test is JUnit 3, not 4. [HADOOP-14180] will help here.
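For reference, a sketch of what the JUnit 4 style would look like; the class 
name is hypothetical and the helper is the one referenced above:

{code}
import static org.junit.Assume.assumeTrue;

import org.junit.Before;

import org.apache.hadoop.fs.adl.live.AdlStorageConfiguration;

public class AdlAssumeSketch {
  @Before
  public void setUp() {
    // JUnit 4: a failed assumption marks the test as skipped, instead of
    // wrapping the whole test body in an if-guard as JUnit 3 requires.
    assumeTrue(AdlStorageConfiguration.isContractTestEnabled());
    // ... normal setup continues ...
  }
}
{code}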

> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940896#comment-15940896
 ] 

Hudson commented on HADOOP-14211:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11454 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11454/])
HADOOP-14211. FilterFs and ChRootedFs are too aggressive about enforcing (wang: 
rev 0e556a5ba645570d381beca60114a1239b27d49f)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java


> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but having an authority doesn't mean one is needed - ViewFS is an 
> example. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.
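One plausible shape of the fix (a sketch only, not necessarily the committed 
patch) is to stop deriving {{authorityNeeded}} from the URI and pass 
{{false}}, since the wrapped fs has already validated its own URI:

{code}
// The wrapped fs was already constructed, i.e. its URI has already been
// validated with whatever authorityNeeded value it actually required.
super(fs.getUri(), fs.getUri().getScheme(),
    false /* authorityNeeded */, fs.getUriDefaultPort());
{code}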



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940892#comment-15940892
 ] 

Hadoop QA commented on HADOOP-14219:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 14 
new + 11 unchanged - 1 fixed = 25 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-sls in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-14219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860415/HADOOP-14219-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 30c6c012c881 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 96fe940 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 

[jira] [Updated] (HADOOP-14227) S3Guard: ITestS3AConcurrentOps is not cleaning up test data

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14227:
---
Priority: Minor  (was: Major)

Setting this JIRA's priority to Minor.

> S3Guard: ITestS3AConcurrentOps is not cleaning up test data
> ---
>
> Key: HADOOP-14227
> URL: https://issues.apache.org/jira/browse/HADOOP-14227
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-14227-HADOOP-13345.000.patch
>
>
> After running {{ITestS3AConcurrentOps}}, the test data is not cleaned up in 
> DynamoDB. There are two reasons:
> # The {{ITestS3AConcurrentOps::teardown()}} method does not call the super 
> teardown() method to clean up the default test directory.
> # The {{auxFs}} is not S3Guard aware even though the {{fs}} under test is. 
> That's because {{auxFs}} is created from a new Configuration object without 
> patching in the S3Guard options (via {{maybeEnableS3Guard(conf);}}).
> This JIRA is to clean up the data after the test.
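A minimal sketch of the two fixes the description implies; the base class is 
an assumption standing in for the real test hierarchy, and only 
{{maybeEnableS3Guard}} is taken from the description:

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// AssumedS3ATestBase is hypothetical: it stands in for the real scale-test
// base class, providing teardown() and maybeEnableS3Guard(Configuration).
public class ConcurrentOpsCleanupSketch extends AssumedS3ATestBase {

  @Override
  public void teardown() throws Exception {
    // Fix 1: call up to the superclass so the default test directory is
    // cleaned up as well.
    super.teardown();
    // ... cleanup specific to this test ...
  }

  private FileSystem createAuxFs(URI uri, Configuration conf)
      throws IOException {
    Configuration auxConf = new Configuration(conf);
    // Fix 2: patch the S3Guard options into the fresh Configuration so
    // auxFs talks to the same metadata store as the fs under test.
    maybeEnableS3Guard(auxConf);
    return FileSystem.get(uri, auxConf);
  }
}
{code}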



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14226) S3Guard: ITestDynamoDBMetadataStoreScale is not cleaning up test data

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14226:
---
Priority: Minor  (was: Major)

Setting this JIRA's priority to Minor.

> S3Guard: ITestDynamoDBMetadataStoreScale is not cleaning up test data
> -
>
> Key: HADOOP-14226
> URL: https://issues.apache.org/jira/browse/HADOOP-14226
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-14226-HADOOP-13345.000.patch
>
>
> After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not 
> cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the 
> finally clause though. The reason is that the internally called method 
> {{DynamoDBMetadataStore::deleteSubtree()}} assumes there should be an 
> item for the parent dest path:
> {code}
> parent=/fake-bucket, child=moved-here, is_dir=true
> {code}
> In DynamoDBMetadataStore implementation, we assume that _if a path exists, 
> all its ancestors will also exist in the table_. We need to pre-create dest 
> path to maintain this invariant so that test data can be cleaned up 
> successfully.
> I think there may be other tests with the same problem. Let's 
> identify/address them separately.
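A sketch of the "pre-create the ancestors" idea under that invariant; 
{{makeDirEntry}} is a hypothetical helper, not a real MetadataStore method:

{code}
// Walk upwards from the dest path and insert a directory entry for every
// missing ancestor, so the invariant holds before the subtree is written.
Path parent = destPath.getParent();
while (parent != null && ms.get(parent) == null) {
  ms.put(makeDirEntry(parent));  // hypothetical: builds a dir PathMetadata
  parent = parent.getParent();
}
{code}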



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13688) Stop bundling HTML source code in javadoc JARs

2017-03-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940879#comment-15940879
 ] 

Andrew Wang commented on HADOOP-13688:
--

I think the 2.8.0 release vote issue might be a bit different, since this 
removes the content from the javadoc jars, and the 2.8.0 tarball had these in 
expanded form. Also, 2.7.2 seemed to be okay and didn't have this change, so 
it's unclear why 2.8.0 blew up.

> Stop bundling HTML source code in javadoc JARs
> --
>
> Key: HADOOP-13688
> URL: https://issues.apache.org/jira/browse/HADOOP-13688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13688.001.patch
>
>
> We generate source code with line numbers for inclusion in the javadoc JARs. 
> Given that there's github and other online viewers, this doesn't seem so 
> useful these days.
> Disabling the "linkSource" option saves us 40MB for the hadoop-common javadoc 
> jar:
> {noformat}
> -rw-r--r-- 1 andrew andrew 98M Oct  5 14:44 
> hadoop-common-3.0.0-alpha2-SNAPSHOT-javadoc.jar
> -rw-r--r-- 1 andrew andrew 58M Oct  5 15:00 
> ./hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0-alpha2-SNAPSHOT-javadoc.jar
> {noformat}
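For reference, the knob in question is maven-javadoc-plugin's {{linksource}} 
flag; a minimal sketch of disabling it in a pom, with the surrounding pom 
context omitted:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- Stop generating the HTML-ized source pages with line numbers. -->
    <linksource>false</linksource>
  </configuration>
</plugin>
{code}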



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13966) Add ability to start DDB local server in every test

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13966:
---
Release Note: For running integration tests, the command "mvn verify 
-Dscale -Ds3guard -Ddynamodblocal -q" will use a faster, local DDB with no 
AWS bills to pay. The in-memory instance is started automatically. For tests 
only.

> Add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13966-HADOOP-13345.000.patch, 
> HADOOP-13966-HADOOP-13345.001.patch, HADOOP-13966-HADOOP-13345.002.patch, 
> HADOOP-13966-HADOOP-13345.003.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13966) Add ability to start DDB local server in every test

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13966:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-13345
 Release Note:   (was: For running integration tests, the command "mvn 
verify -Dscale -Ds3guard -Ddynamodblocal -q" will use a faster, local DDB 
with no AWS bills to pay. The in-memory instance is started automatically. 
For tests only.)
   Status: Resolved  (was: Patch Available)

Thanks [~ste...@apache.org] for reviewing and committing. I reverted the v2 
patch and committed the v3 patch to the feature branch.

> Add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13966-HADOOP-13345.000.patch, 
> HADOOP-13966-HADOOP-13345.001.patch, HADOOP-13966-HADOOP-13345.002.patch, 
> HADOOP-13966-HADOOP-13345.003.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940859#comment-15940859
 ] 

Andrew Wang commented on HADOOP-14211:
--

Cool, thanks for doing the legwork :)

> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but having an authority doesn't mean one is needed - ViewFS is an 
> example. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour updated HADOOP-14219:

Attachment: HADOOP-14219-branch-2.001.patch

Add version for branch-2

> RumenToSLS: parsing problem with crashed attempts
> -
>
> Key: HADOOP-14219
> URL: https://issues.apache.org/jira/browse/HADOOP-14219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
> Attachments: HADOOP-14219.001.patch, HADOOP-14219-branch-2.001.patch
>
>
> In the case of crashed task attempts, the rumen logs may contain task 
> attempts with a null hostName and a finishTime of -1,
> for example:
> {code}
>{
>   "resourceUsageMetrics": {
> "heapUsage": 0,
> "physicalMemoryUsage": 0,
> "virtualMemoryUsage": 0,
> "cumulativeCpuUsage": 0
>   },
>   "vmemKbytes": [],
>   "physMemKbytes": [],
>   "cpuUsages": [],
>   "clockSplits": [],
>   "location": null,
>   "sortFinished": -1,
>   "shuffleFinished": -1,
>   "spilledRecords": -1,
>   "reduceOutputRecords": -1,
>   "reduceShuffleBytes": -1,
>   "fileBytesRead": -1,
>   "hdfsBytesWritten": -1,
>   "hdfsBytesRead": -1,
>   "hostName": null,
>   "finishTime": -1,
>   "startTime": 1489619193378,
>   "result": null,
>   "attemptID": "attempt_1488896259152_410442_r_15_1",
>   "fileBytesWritten": -1,
>   "mapInputRecords": -1,
>   "mapInputBytes": -1,
>   "mapOutputBytes": -1,
>   "mapOutputRecords": -1,
>   "combineInputRecords": -1,
>   "reduceInputGroups": -1,
>   "reduceInputRecords": -1
> }
> {code}
> The Jackson parser will automatically deserialize -1 as a java.lang.Integer. 
> However, RumenToSLSConverter assumes that Jackson has deserialized every 
> timestamp as an instance of java.lang.Long, resulting in a ClassCastException.
> RumenToSLSConverter also assumes that hostName is not null, so we can also 
> get a NullPointerException.
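A minimal illustration of the pitfall and a defensive fix; the JSON snippet 
is cut down and the class name is made up for the example:

{code}
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

class NumericWideningExample {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    Map<?, ?> attempt = mapper.readValue(
        "{\"startTime\": 1489619193378, \"finishTime\": -1}", Map.class);

    // Jackson maps -1 to Integer but 1489619193378 to Long, so a blind
    // (Long) cast on finishTime throws ClassCastException. Widening via
    // Number avoids that.
    long finish = ((Number) attempt.get("finishTime")).longValue();
    long start = ((Number) attempt.get("startTime")).longValue();
    System.out.println("duration = " + (finish - start));
  }
}
{code}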



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940857#comment-15940857
 ] 

Erik Krogen commented on HADOOP-14211:
--

[~andrew.wang] thanks for the review/commit! Yeah, I checked, and they don't. 
The validation logic generally became a little stricter in the {{FileContext}} 
APIs. Agreed that nothing else really uses them which is, I assume, why I'm 
uncovering issues that only exist in the "new" APIs and have been fixed in 
{{FileSystem}}...

> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but having an authority doesn't mean one is needed - ViewFS is an 
> example. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version

2017-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10101:
-
Hadoop Flags: Incompatible change, Reviewed  (was: Incompatible change)

+1 the 018 patch is perfect, thanks!

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.014.patch, 
> HADOOP-10101.015.patch, HADOOP-10101.016.patch, HADOOP-10101.017.patch, 
> HADOOP-10101.018.patch, HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This issue tries to 
> update to as recent a version as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14211:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks Erik, committed to trunk and branch-2.

Have you checked if FilterFileSystem and ViewFileSystem suffer from a similar 
problem? Most apps are more comfortable with the FileSystem APIs. I don't know 
anything that uses FileContext besides YARN and MR2.

> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but having an authority doesn't mean one is needed - ViewFS is an 
> example. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13966) Add ability to start DDB local server in every test

2017-03-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13966:
---
Release Note: For running integration tests, the command "mvn verify 
-Dscale -Ds3guard -Ddynamodblocal -q" will use a faster, local DDB with no 
AWS bills to pay. The in-memory instance is started automatically. For tests 
only.

> Add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-13966-HADOOP-13345.000.patch, 
> HADOOP-13966-HADOOP-13345.001.patch, HADOOP-13966-HADOOP-13345.002.patch, 
> HADOOP-13966-HADOOP-13345.003.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940847#comment-15940847
 ] 

Hadoop QA commented on HADOOP-14211:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14211 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860399/HADOOP-14211.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 955e8fdcaee2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d4f73e7 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11924/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11924/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> 

[jira] [Commented] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940831#comment-15940831
 ] 

Jonathan Eagles commented on HADOOP-14233:
--

My local benchmarking shows roughly a 15% performance gain for 
Configuration.set after applying this patch.
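The patch itself isn't reproduced here; as a general illustration of the 
technique, assuming Guava's {{Preconditions}}, the template overload defers 
message formatting to the failure path:

{code}
import com.google.common.base.Preconditions;

class PreconditionSketch {
  static void set(String name, String value) {
    // Eager: the failure message is concatenated on every call, even
    // though the check almost always passes.
    Preconditions.checkArgument(value != null,
        "The value of property " + name + " must not be null");

    // Lazy: the %s template is only formatted when the check fails.
    Preconditions.checkArgument(value != null,
        "The value of property %s must not be null", name);
  }
}
{code}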

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14233.1.patch
>
>
> The String in the precondition check is constructed prior to failure 
> detection. Since the normal case is no error, we can gain performance by 
> delaying the construction of the string until the failure is detected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14233:
-
Description: The String in the precondition check is constructed prior to 
failure detection. Since the normal case is no error, we can gain performance 
by delaying the construction of the string until the failure is detected.

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14233.1.patch
>
>
> The String in the precondition check is constructed prior to failure 
> detection. Since the normal case is no error, we can gain performance by 
> delaying the construction of the string until the failure is detected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14233:
-
Assignee: Jonathan Eagles
  Status: Patch Available  (was: Open)

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: HADOOP-14233.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14233:
-
Attachment: HADOOP-14233.1.patch

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
> Attachments: HADOOP-14233.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-24 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940814#comment-15940814
 ] 

Manoj Govindassamy commented on HADOOP-13715:
-

Test failures not related to the patch. 

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch, 
> HADOOP-13715.03.patch, HADOOP-13715.04.patch, HADOOP-13715.05.patch, 
> HADOOP-13715.06.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as Flume or HBase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14227) S3Guard: ITestS3AConcurrentOps is not cleaning up test data

2017-03-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940805#comment-15940805
 ] 

Mingliang Liu commented on HADOOP-14227:


Sorry I forgot to mention. Tested against us-west-1. Manually checked the data 
is cleaned up w/ this patch.

Thanks,

> S3Guard: ITestS3AConcurrentOps is not cleaning up test data
> ---
>
> Key: HADOOP-14227
> URL: https://issues.apache.org/jira/browse/HADOOP-14227
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14227-HADOOP-13345.000.patch
>
>
> After running {{ITestS3AConcurrentOps}}, the test data is not cleaned up in 
> DynamoDB. There are two reasons:
> # The {{ITestS3AConcurrentOps::teardown()}} method does not call the super 
> teardown() method to clean up the default test directory.
> # The {{auxFs}} is not S3Guard aware even though the {{fs}} under test is. 
> That's because {{auxFs}} is created from a new Configuration object without 
> patching in the S3Guard options (via {{maybeEnableS3Guard(conf);}}).
> This JIRA is to clean up the data after the test.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14226) S3Guard: ITestDynamoDBMetadataStoreScale is not cleaning up test data

2017-03-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15939462#comment-15939462
 ] 

Mingliang Liu edited comment on HADOOP-14226 at 3/24/17 5:45 PM:
-

Tested: us-west-1.

I manually checked the test and the data was cleaned up successfully w/ this 
patch.


was (Author: liuml07):
I manually checked the test and the data was cleaned up successfully w/ this 
patch.

> S3Guard: ITestDynamoDBMetadataStoreScale is not cleaning up test data
> -
>
> Key: HADOOP-14226
> URL: https://issues.apache.org/jira/browse/HADOOP-14226
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14226-HADOOP-13345.000.patch
>
>
> After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not 
> cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the 
> finally clause though. The reason is that the internally called method 
> {{DynamoDBMetadataStore::deleteSubtree()}} assumes there should be an 
> item for the parent dest path:
> {code}
> parent=/fake-bucket, child=moved-here, is_dir=true
> {code}
> In DynamoDBMetadataStore implementation, we assume that _if a path exists, 
> all its ancestors will also exist in the table_. We need to pre-create dest 
> path to maintain this invariant so that test data can be cleaned up 
> successfully.
> I think there may be other tests with the same problem. Let's 
> identify/address them separately.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14233:
-
Summary: Delay construction of PreCondition.check failure message in 
Configuration#set  (was: Don't pre-construct PreCondition.check failure message 
in Configuration#set)

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14038) Rename ADLS credential properties

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940781#comment-15940781
 ] 

Hadoop QA commented on HADOOP-14038:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 46s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
42s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860376/HADOOP-14038.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 369fd3421eb6 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab759e9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11920/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11920/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-azure-datalake U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11920/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14233:
-
Summary: Delay construction of PreCondition.check failure message in 
Configuration#set  (was: Delay contsruction of PreCondition.check failure 
message in Configuration#set)

> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14233) Don't pre-construct PreCondition.check failure message in Configuration#set

2017-03-24 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-14233:


 Summary: Don't pre-construct PreCondition.check failure message in 
Configuration#set
 Key: HADOOP-14233
 URL: https://issues.apache.org/jira/browse/HADOOP-14233
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jonathan Eagles






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Summary: FilterFs and ChRootedFs are too aggressive about enforcing 
"authorityNeeded"  (was: ChRootedFs is too aggressive about enforcing 
"authorityNeeded")

> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch
>
>
> Right now {{ChRootedFs}} passes the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but that isn't necessarily correct - ViewFS itself is an example. 
> In fact, you will encounter this issue if you try to nest one ViewFS within 
> another. I can't think of any reason why you would want to do that, but 
> there's no reason why you shouldn't be able to; in general, ViewFS is making 
> an assumption that it then invalidates by its own behavior. The 
> {{authorityNeeded}} check isn't necessary in this case anyway: {{fs}} is 
> already an instantiated {{AbstractFileSystem}}, which means it has already 
> used the same constructor with the value of {{authorityNeeded}} (and 
> corresponding validation) that it actually requires.
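A sketch of the direction the description points to (not necessarily the attached patch): since {{fs}} has already validated its own authority requirement, the wrapper can simply pass {{false}}:
{code}
// Sketch only: the wrapped fs has already validated its authority in its
// own constructor, so the wrapping ChRootedFs/FilterFs need not require one.
super(fs.getUri(), fs.getUri().getScheme(),
    false /* authorityNeeded */, fs.getUriDefaultPort());
{code}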



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Attachment: HADOOP-14211.001.patch

Thanks for pointing that out [~andrew.wang]! I updated the ticket and added a 
new v001 patch to reflect changes to both {{FilterFs}} and {{ChRootedFs}}. This 
makes for a somewhat more sensible unit test as well.

> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch, HADOOP-14211.001.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS is an example of 
> this. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14211) FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"

2017-03-24 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Description: 
Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
{{AbstractFileSystem}} superconstructor:
{code}
super(fs.getUri(), fs.getUri().getScheme(),
fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
{code}
This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
authority, but this isn't necessarily the case - ViewFS is an example of this. 
You will encounter this issue if you try to filter a ViewFS, or nest one ViewFS 
within another. The {{authorityNeeded}} check isn't necessary in this case 
anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} which means it 
has already used the same constructor with the value of {{authorityNeeded}} 
(and corresponding validation) that it actually requires.

  was:
Right now {{ChRootedFs}} passes the following up to the {{AbstractFileSystem}} 
superconstructor:
{code}
super(fs.getUri(), fs.getUri().getScheme(),
fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
{code}
This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
authority, but that isn't necessarily correct - ViewFS itself is an example. 
In fact, you will encounter this issue if you try to nest one ViewFS within 
another. I can't think of any reason why you would want to do that, but 
there's no reason why you shouldn't be able to; in general, ViewFS is making 
an assumption that it then invalidates by its own behavior. The 
{{authorityNeeded}} check isn't necessary in this case anyway: {{fs}} is 
already an instantiated {{AbstractFileSystem}}, which means it has already 
used the same constructor with the value of {{authorityNeeded}} (and 
corresponding validation) that it actually requires.


> FilterFs and ChRootedFs are too aggressive about enforcing "authorityNeeded"
> 
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch
>
>
> Right now {{FilterFs}} and {{ChRootedFs}} pass the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS is an example of 
> this. You will encounter this issue if you try to filter a ViewFS, or nest 
> one ViewFS within another. The {{authorityNeeded}} check isn't necessary in 
> this case anyway; {{fs}} is already an instantiated {{AbstractFileSystem}} 
> which means it has already used the same constructor with the value of 
> {{authorityNeeded}} (and corresponding validation) that it actually requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14231) Using parentheses is not allowed in auth_to_local regex

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940740#comment-15940740
 ] 

Hadoop QA commented on HADOOP-14231:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch 
generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
36s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14231 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860378/HADOOP-14231.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d4a141e03cba 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab759e9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11921/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-auth.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11921/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11921/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Using parentheses is not allowed in auth_to_local regex
> ---
>
> Key: HADOOP-14231
> URL: https://issues.apache.org/jira/browse/HADOOP-14231
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>

[jira] [Resolved] (HADOOP-14232) RumenToSLS: rackName may contain slashes

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour resolved HADOOP-14232.
-
Resolution: Fixed

> RumenToSLS: rackName may contain slashes
> -
>
> Key: HADOOP-14232
> URL: https://issues.apache.org/jira/browse/HADOOP-14232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
>
> A rack name may contain slashes, so a hostName will contain several slashes.
> The separation between the rack name and the hostName is the last slash.
> For example: /platform1/pod1/rack1/node1
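A tiny illustrative sketch of the parsing rule described above (not from a patch):
{code}
// Split a fully qualified node location on the LAST slash, since the rack
// path itself may contain slashes.
String full = "/platform1/pod1/rack1/node1";
int i = full.lastIndexOf('/');
String rack = full.substring(0, i);   // "/platform1/pod1/rack1"
String host = full.substring(i + 1);  // "node1"
{code}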



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14232) RumenToSLS: rackName may contain slashes

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour reopened HADOOP-14232:
-

> RumenToSLS: rackName may contain slashes
> -
>
> Key: HADOOP-14232
> URL: https://issues.apache.org/jira/browse/HADOOP-14232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
>
> A rack name may contain slashes, so a hostName will contain several slashes.
> The separation between the rack name and the hostName is the last slash.
> For example: /platform1/pod1/rack1/node1



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14232) RumenToSLS: rackName may contain slashes

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour resolved HADOOP-14232.
-
Resolution: Won't Fix

> RumenToSLS: rackName may contain slashes
> -
>
> Key: HADOOP-14232
> URL: https://issues.apache.org/jira/browse/HADOOP-14232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
>
> A rack name may contain slashes, so a hostName will contain several slashes.
> The separation between the rack name and the hostName is the last slash.
> For example: /platform1/pod1/rack1/node1



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14232) RumenToSLS: rackName may contain slashes

2017-03-24 Thread Julien Vaudour (JIRA)
Julien Vaudour created HADOOP-14232:
---

 Summary: RumenToSLS: rackName may contain slashes
 Key: HADOOP-14232
 URL: https://issues.apache.org/jira/browse/HADOOP-14232
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Julien Vaudour
Priority: Minor


A rack name may contain slashes, so a hostName will contain several slashes.
The separation between the rack name and the hostName is the last slash.
For example: /platform1/pod1/rack1/node1



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13966) Add ability to start DDB local server in every test

2017-03-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940700#comment-15940700
 ] 

Steve Loughran commented on HADOOP-13966:
-

OK, 

+1 on this (I'd already committed it); sorry for getting the wrong one in. 
Mingliang, on the basis that you know what you are doing, I'll delegate the 
rollback to you.

> Add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-13966-HADOOP-13345.000.patch, 
> HADOOP-13966-HADOOP-13345.001.patch, HADOOP-13966-HADOOP-13345.002.patch, 
> HADOOP-13966-HADOOP-13345.003.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-03-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940698#comment-15940698
 ] 

Ravi Prakash commented on HADOOP-14163:
---

Thanks Marton for your attention to this often neglected but very important 
facet of Hadoop.
I'd also like to draw your attention to 
https://issues.apache.org/jira/browse/HADOOP-8039 . In my experience, whenever 
I have tried to build the documentation and stage it, the staged files are 
replete with broken links. Is your JIRA going to fix this? Perhaps I'm using 
the wrong command?

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution for migrating the old site to a new, modern 
> static site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least be redirected)
>  * it should be easy to automatically add more content required by a release 
> (most probably by creating separate markdown files)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour updated HADOOP-14219:

Status: Open  (was: Patch Available)

> RumenToSLS: parsing problem with crashed attempts
> -
>
> Key: HADOOP-14219
> URL: https://issues.apache.org/jira/browse/HADOOP-14219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
> Attachments: HADOOP-14219.001.patch
>
>
> In the case of crashed task attempts, the rumen logs may contain task 
> attempts with a null hostName and a finishTime of -1.
> For example:
> {code}
>{
>   "resourceUsageMetrics": {
> "heapUsage": 0,
> "physicalMemoryUsage": 0,
> "virtualMemoryUsage": 0,
> "cumulativeCpuUsage": 0
>   },
>   "vmemKbytes": [],
>   "physMemKbytes": [],
>   "cpuUsages": [],
>   "clockSplits": [],
>   "location": null,
>   "sortFinished": -1,
>   "shuffleFinished": -1,
>   "spilledRecords": -1,
>   "reduceOutputRecords": -1,
>   "reduceShuffleBytes": -1,
>   "fileBytesRead": -1,
>   "hdfsBytesWritten": -1,
>   "hdfsBytesRead": -1,
>   "hostName": null,
>   "finishTime": -1,
>   "startTime": 1489619193378,
>   "result": null,
>   "attemptID": "attempt_1488896259152_410442_r_15_1",
>   "fileBytesWritten": -1,
>   "mapInputRecords": -1,
>   "mapInputBytes": -1,
>   "mapOutputBytes": -1,
>   "mapOutputRecords": -1,
>   "combineInputRecords": -1,
>   "reduceInputGroups": -1,
>   "reduceInputRecords": -1
> }
> {code}
> The Jackson parser will automatically deserialize -1 as a java.lang.Integer. 
> However, RumenToSLSConverter assumes that Jackson has deserialized all 
> timestamps as instances of java.lang.Long, resulting in a ClassCastException.
> RumenToSLSConverter also assumes that hostName is not null, so we can also 
> get a NullPointerException.
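A hedged sketch of the defensive parsing the description calls for (hypothetical helper names, not the attached patch):
{code}
import java.util.Map;

// Jackson may deserialize small integral values as Integer, so read
// timestamps as Number instead of casting to Long, and skip crashed
// attempts that have no usable host or finish time.
class CrashedAttemptGuard {
  static long asLong(Object v) {
    return (v instanceof Number) ? ((Number) v).longValue() : -1L;
  }
  static boolean usable(Map<String, Object> attempt) {
    return attempt.get("hostName") != null
        && asLong(attempt.get("finishTime")) >= 0;
  }
}
{code}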



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour updated HADOOP-14219:

Attachment: (was: HADOOP-14219-branch-2.6.0.001.patch)

> RumenToSLS: parsing problem with crashed attempts
> -
>
> Key: HADOOP-14219
> URL: https://issues.apache.org/jira/browse/HADOOP-14219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
> Attachments: HADOOP-14219.001.patch
>
>
> In the case of crashed task attempts, the rumen logs may contain task 
> attempts with a null hostName and a finishTime of -1.
> For example:
> {code}
>{
>   "resourceUsageMetrics": {
> "heapUsage": 0,
> "physicalMemoryUsage": 0,
> "virtualMemoryUsage": 0,
> "cumulativeCpuUsage": 0
>   },
>   "vmemKbytes": [],
>   "physMemKbytes": [],
>   "cpuUsages": [],
>   "clockSplits": [],
>   "location": null,
>   "sortFinished": -1,
>   "shuffleFinished": -1,
>   "spilledRecords": -1,
>   "reduceOutputRecords": -1,
>   "reduceShuffleBytes": -1,
>   "fileBytesRead": -1,
>   "hdfsBytesWritten": -1,
>   "hdfsBytesRead": -1,
>   "hostName": null,
>   "finishTime": -1,
>   "startTime": 1489619193378,
>   "result": null,
>   "attemptID": "attempt_1488896259152_410442_r_15_1",
>   "fileBytesWritten": -1,
>   "mapInputRecords": -1,
>   "mapInputBytes": -1,
>   "mapOutputBytes": -1,
>   "mapOutputRecords": -1,
>   "combineInputRecords": -1,
>   "reduceInputGroups": -1,
>   "reduceInputRecords": -1
> }
> {code}
> The Jackson parser will automatically deserialize -1 as a java.lang.Integer. 
> However, RumenToSLSConverter assumes that Jackson has deserialized all 
> timestamps as instances of java.lang.Long, resulting in a ClassCastException.
> RumenToSLSConverter also assumes that hostName is not null, so we can also 
> get a NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour updated HADOOP-14219:

Status: Patch Available  (was: Open)

> RumenToSLS: parsing problem with crashed attempts
> -
>
> Key: HADOOP-14219
> URL: https://issues.apache.org/jira/browse/HADOOP-14219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
> Attachments: HADOOP-14219.001.patch
>
>
> In the case of crashed task attempts, the rumen logs may contain task 
> attempts with a null hostName and a finishTime of -1.
> For example:
> {code}
>{
>   "resourceUsageMetrics": {
> "heapUsage": 0,
> "physicalMemoryUsage": 0,
> "virtualMemoryUsage": 0,
> "cumulativeCpuUsage": 0
>   },
>   "vmemKbytes": [],
>   "physMemKbytes": [],
>   "cpuUsages": [],
>   "clockSplits": [],
>   "location": null,
>   "sortFinished": -1,
>   "shuffleFinished": -1,
>   "spilledRecords": -1,
>   "reduceOutputRecords": -1,
>   "reduceShuffleBytes": -1,
>   "fileBytesRead": -1,
>   "hdfsBytesWritten": -1,
>   "hdfsBytesRead": -1,
>   "hostName": null,
>   "finishTime": -1,
>   "startTime": 1489619193378,
>   "result": null,
>   "attemptID": "attempt_1488896259152_410442_r_15_1",
>   "fileBytesWritten": -1,
>   "mapInputRecords": -1,
>   "mapInputBytes": -1,
>   "mapOutputBytes": -1,
>   "mapOutputRecords": -1,
>   "combineInputRecords": -1,
>   "reduceInputGroups": -1,
>   "reduceInputRecords": -1
> }
> {code}
> The Jackson parser will automatically deserialize -1 as a java.lang.Integer. 
> However, RumenToSLSConverter assumes that Jackson has deserialized all 
> timestamps as instances of java.lang.Long, resulting in a ClassCastException.
> RumenToSLSConverter also assumes that hostName is not null, so we can also 
> get a NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940690#comment-15940690
 ] 

Hadoop QA commented on HADOOP-13665:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 2 unchanged - 0 fixed = 10 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 26s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13665 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860372/HADOOP-13665.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 860d884b355a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab759e9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11918/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11918/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11918/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11918/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665

[jira] [Commented] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940687#comment-15940687
 ] 

Hadoop QA commented on HADOOP-14219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
52s{color} | {color:red} root in branch-2.6.0 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.6.0 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} branch-2.6.0 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} branch-2.6.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.6.0 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} branch-2.6.0 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-tools/hadoop-sls in branch-2.6.0 has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} branch-2.6.0 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2.6.0 passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-sls in the patch failed with JDK v1.8.0_121. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 12s{color} 
| {color:red} hadoop-sls in the patch failed with JDK v1.8.0_121. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-sls in the patch failed with JDK v1.7.0_121. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} hadoop-sls in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1464 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
33s{color} | {color:red} The patch has 70 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 12s{color} 
| {color:red} hadoop-sls in the patch failed with JDK v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 88 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:date2017-03-24 |
| JIRA Issue | HADOOP-14219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860381/HADOOP-14219-branch-2.6.0.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c1e223f0d245 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 

[jira] [Updated] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation

2017-03-24 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14229:
--
Attachment: HADOOP-14229.02.patch

Patch 02:
Make it more compact
Test results:
{code}hadoop-3.0.0-alpha2/bin/hadoop kerbname 
{nn,dn,jn,rm,nm,jhs}/host.dom...@realm.tld
Name: nn/host.dom...@realm.tld to hdfs
Name: dn/host.dom...@realm.tld to hdfs
Name: jn/host.dom...@realm.tld to hdfs
Name: rm/host.dom...@realm.tld to yarn
Name: nm/host.dom...@realm.tld to yarn
Name: jhs/host.dom...@realm.tld to mapred{code}

> hadoop.security.auth_to_local example is incorrect in the documentation
> ---
>
> Key: HADOOP-14229
> URL: https://issues.apache.org/jira/browse/HADOOP-14229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch
>
>
> Let's take jhs as an example:
> {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code}
> That means the principal has 2 components (jhs/myhost@REALM).
> The second column converts this to jhs@REALM, so the regex will not match 
> since it expects a / in the principal.
> My suggestion is
> {code}RULE:[2:$1](jhs)s/.*/mapred/{code}
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
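The suggested rule can also be checked programmatically; a minimal sketch using {{KerberosName}}, the class that applies auth_to_local rules (not part of the patch):
{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

class RuleCheck {
  public static void main(String[] args) throws Exception {
    // Apply only the suggested jhs rule, falling back to DEFAULT.
    KerberosName.setRules("RULE:[2:$1](jhs)s/.*/mapred/\nDEFAULT");
    // Expected output: mapred
    System.out.println(
        new KerberosName("jhs/myhost@REALM.TLD").getShortName());
  }
}
{code}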



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14219) RumenToSLS: parsing problem with crashed attempts

2017-03-24 Thread Julien Vaudour (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Vaudour updated HADOOP-14219:

Attachment: HADOOP-14219-branch-2.6.0.001.patch

> RumenToSLS: parsing problem with crashed attempts
> -
>
> Key: HADOOP-14219
> URL: https://issues.apache.org/jira/browse/HADOOP-14219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: Julien Vaudour
>Priority: Minor
> Attachments: HADOOP-14219.001.patch, 
> HADOOP-14219-branch-2.6.0.001.patch
>
>
> In the case of crashed task attempts, the rumen logs may contain task 
> attempts with a null hostName and a finishTime of -1.
> For example:
> {code}
>{
>   "resourceUsageMetrics": {
> "heapUsage": 0,
> "physicalMemoryUsage": 0,
> "virtualMemoryUsage": 0,
> "cumulativeCpuUsage": 0
>   },
>   "vmemKbytes": [],
>   "physMemKbytes": [],
>   "cpuUsages": [],
>   "clockSplits": [],
>   "location": null,
>   "sortFinished": -1,
>   "shuffleFinished": -1,
>   "spilledRecords": -1,
>   "reduceOutputRecords": -1,
>   "reduceShuffleBytes": -1,
>   "fileBytesRead": -1,
>   "hdfsBytesWritten": -1,
>   "hdfsBytesRead": -1,
>   "hostName": null,
>   "finishTime": -1,
>   "startTime": 1489619193378,
>   "result": null,
>   "attemptID": "attempt_1488896259152_410442_r_15_1",
>   "fileBytesWritten": -1,
>   "mapInputRecords": -1,
>   "mapInputBytes": -1,
>   "mapOutputBytes": -1,
>   "mapOutputRecords": -1,
>   "combineInputRecords": -1,
>   "reduceInputGroups": -1,
>   "reduceInputRecords": -1
> }
> {code}
> The Jackson parser will automatically deserialize -1 as a java.lang.Integer. 
> However, RumenToSLSConverter assumes that Jackson has deserialized all 
> timestamps as instances of java.lang.Long, resulting in a ClassCastException.
> RumenToSLSConverter also assumes that hostName is not null, so we can also 
> get a NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940634#comment-15940634
 ] 

Hadoop QA commented on HADOOP-14230:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14230 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860375/HADOOP-14230.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d954f79da9e1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab759e9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11919/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11919/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after 

[jira] [Commented] (HADOOP-14231) Using parentheses is not allowed in auth_to_local regex

2017-03-24 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940627#comment-15940627
 ] 

Andras Bokor commented on HADOOP-14231:
---

Uploading patch 01. It removes the closing-parenthesis restriction and adds 
some JUnit tests.

> Using parentheses is not allowed in auth_to_local regex
> ---
>
> Key: HADOOP-14231
> URL: https://issues.apache.org/jira/browse/HADOOP-14231
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14231.01.patch
>
>
> I tried to set the following rule for the auth_to_local property:
> {code}"RULE:[2:$1]((n|d)n)s/.*/hdfs//{code}
> but I got the following exception:
> {code}Exception in thread "main" java.util.regex.PatternSyntaxException: 
> Unclosed group near index 9
> (nn|dn|jn{code}
> I found that this occurs because {{ruleParser}} in 
> {{org.apache.hadoop.security.authentication.util.KerberosName}} excludes 
> closing parentheses.
> I do not really see the value of excluding parentheses (am I missing 
> something?), so I would remove this restriction to be able to use more regex 
> functionality.
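For what it's worth, the grouped component is perfectly valid as a Java regex once the parser stops rejecting the closing parenthesis; a quick standalone check (not part of the patch):
{code}
import java.util.regex.Pattern;

class GroupCheck {
  public static void main(String[] args) {
    // The regex component from the rule above, with grouping.
    Pattern p = Pattern.compile("(n|d)n");
    System.out.println(p.matcher("nn").matches()); // true
    System.out.println(p.matcher("dn").matches()); // true
  }
}
{code}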



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14231) Using parentheses is not allowed in auth_to_local regex

2017-03-24 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14231:
--
Attachment: HADOOP-14231.01.patch

> Using parentheses is not allowed in auth_to_local regex
> ---
>
> Key: HADOOP-14231
> URL: https://issues.apache.org/jira/browse/HADOOP-14231
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14231.01.patch
>
>
> I tried to set the following rule for the auth_to_local property:
> {code}"RULE:[2:$1]((n|d)n)s/.*/hdfs//{code}
> but I got the following exception:
> {code}Exception in thread "main" java.util.regex.PatternSyntaxException: 
> Unclosed group near index 9
> (nn|dn|jn{code}
> I found that this occurs because {{ruleParser}} in 
> {{org.apache.hadoop.security.authentication.util.KerberosName}} excludes 
> closing parentheses.
> I do not really see the value of excluding parentheses (am I missing 
> something?), so I would remove this restriction to be able to use more regex 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


