[jira] [Commented] (HADOOP-13434) Add quoting to Shell class

2016-10-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604283#comment-15604283
 ] 

ASF GitHub Bot commented on HADOOP-13434:
-

Github user aajisaka commented on the issue:

https://github.com/apache/hadoop/pull/119
  
HADOOP-13434 has been fixed. Hi @omalley, would you close this pull request?


> Add quoting to Shell class
> --
>
> Key: HADOOP-13434
> URL: https://issues.apache.org/jira/browse/HADOOP-13434
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13434-branch-2.7.01.patch, HADOOP-13434.patch, 
> HADOOP-13434.patch, HADOOP-13434.patch
>
>
> The Shell class makes assumptions that the parameters won't have spaces or 
> other special characters, even when it invokes bash.
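
For illustration, a minimal sketch of the kind of quoting the issue is about
(a hypothetical helper, not the committed Shell API):

{code}
// Hypothetical helper, not the actual Shell class API: wrap an argument in
// single quotes so bash treats it as one literal word even if it contains
// spaces or metacharacters; embedded single quotes are closed, escaped,
// and reopened via the '\'' idiom.
public static String quoteBashArg(String arg) {
  return "'" + arg.replace("'", "'\\''") + "'";
}
{code}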






[jira] [Commented] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604263#comment-15604263
 ] 

Hadoop QA commented on HADOOP-13309:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13309 |
| GITHUB PR | https://github.com/apache/hadoop/pull/138 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux d462d6b59a8d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 5c2f67b |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10888/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.
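
As a hedged illustration of the gap (the bucket and key below are made up):
S3A has no real owner/permission metadata to return, so FileStatus values are
synthesized on the client side rather than enforced by S3.

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only; the bucket and object key are hypothetical.
public class S3AOwnershipDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"),
        new Configuration());
    FileStatus st = fs.getFileStatus(new Path("s3a://example-bucket/data.txt"));
    System.out.println(st.getOwner());      // the current user, not the writer
    System.out.println(st.getPermission()); // synthesized, not an enforced ACL
  }
}
{code}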






[jira] [Updated] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12774:
---
Fix Version/s: 3.0.0-alpha2
   2.8.0

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
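
A minimal sketch of the proposed change (note the actual UGI method names are
getCurrentUser() and getShortUserName(); the homedir layout is illustrative):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class HomedirSketch {
  // Resolve the username via UGI so HADOOP_USER_NAME and doAs identities
  // are honored, instead of reading the JVM-wide "user.name" property.
  static Path homeDirFor() throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    return new Path("/user/" + ugi.getShortUserName());
  }
}
{code}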






[jira] [Commented] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604239#comment-15604239
 ] 

Chris Nauroth commented on HADOOP-13309:


I submitted a fresh pre-commit run just to be sure, since a few patches have 
been committed ahead of this.

https://builds.apache.org/job/PreCommit-HADOOP-Build/10888/

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.






[jira] [Updated] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12774:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1 for the patch.  I did another test pass against us-west-2 to verify it.  I 
have committed this to trunk, branch-2 and branch-2.8.  Steve, thank you for 
the patch.

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs






[jira] [Commented] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604198#comment-15604198
 ] 

Hudson commented on HADOOP-13727:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10670 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10670/])
HADOOP-13727. S3A: Reduce high number of connections to EC2 Instance (cnauroth: 
rev d8fa1cfa6722cbf7a4ec3d6b9c44b034da9aa351)
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/SharedInstanceProfileCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AWSCredentialProviderList.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAWSCredentialsProvider.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java


> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch, HADOOP-13727-branch-2.005.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.
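
A rough sketch of the sharing idea (the class name below is illustrative; the
committed SharedInstanceProfileCredentialsProvider may differ in detail):

{code}
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

// One JVM-wide provider so every S3A client/thread reuses a single cached
// credential fetch instead of opening its own connections to the Instance
// Metadata Service.
public final class SharedProviderSketch {
  private static final InstanceProfileCredentialsProvider INSTANCE =
      new InstanceProfileCredentialsProvider();

  private SharedProviderSketch() {}

  public static InstanceProfileCredentialsProvider getInstance() {
    return INSTANCE;
  }
}
{code}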






[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604184#comment-15604184
 ] 

Hadoop QA commented on HADOOP-12774:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-12774 |
| GITHUB PR | https://github.com/apache/hadoop/pull/136 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f69ce4a0ad55 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 5b7cbb5 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| findbugs | v3.0.0 |
| JDK v1.7.0_111  Test Results | 

[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604158#comment-15604158
 ] 

Chris Nauroth commented on HADOOP-12774:


I submitted a fresh pre-commit run here:

https://builds.apache.org/job/PreCommit-HADOOP-Build/10887/

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs






[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Fix Version/s: 3.0.0-alpha2

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch, HADOOP-13727-branch-2.005.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.






[jira] [Comment Edited] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-24 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604152#comment-15604152
 ] 

Benoy Antony edited comment on HADOOP-12082 at 10/25/16 4:34 AM:
-

I agree [~hgadre]. I started a ReBuild just to be on the safe side.



was (Author: benoyantony):
I agree @Hgarde. I started a ReBuild just to be on the safe side.


> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2-003.patch, 
> HADOOP-12082-branch-2.8-001.patch, HADOOP-12082-branch-2.8-002.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI. 
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
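
For illustration, a hedged servlet-level sketch of the multi-challenge
handshake described above (the realm name is made up, and the real
AuthenticationHandler wiring differs):

{code}
import javax.servlet.http.HttpServletResponse;

public class MultiChallengeSketch {
  // Offer both schemes in one 401 so the client picks the strongest one it
  // supports: Negotiate for Kerberos/SPNEGO, Basic for LDAP.
  static void challenge(HttpServletResponse response) {
    response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
    response.addHeader("WWW-Authenticate", "Negotiate");
    response.addHeader("WWW-Authenticate", "Basic realm=\"ldap\"");
  }
}
{code}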





[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8.  Rajesh, thank you for 
reporting the issue and testing the patch.  Steve, thank you for your code 
review.

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch, HADOOP-13727-branch-2.005.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.






[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-24 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604152#comment-15604152
 ] 

Benoy Antony commented on HADOOP-12082:
---

I agree @Hgarde. I started a ReBuild just to be on the safe side.


> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2-003.patch, 
> HADOOP-12082-branch-2.8-001.patch, HADOOP-12082-branch-2.8-002.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI. 
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]




[jira] [Commented] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15603969#comment-15603969
 ] 

Hadoop QA commented on HADOOP-8299:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-8299 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835054/HADOOP-8299.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 979a8b738b01 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0a166b1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10885/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10885/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-8299.01.patch
>
>
> We currently assume [a typical viewfs client 
> 

[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15603941#comment-15603941
 ] 

Hadoop QA commented on HADOOP-13055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} root: The patch generated 0 new + 129 unchanged - 9 
fixed = 129 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m  
0s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835046/HADOOP-13055.04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a8127664312 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9d17585 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10884/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10884/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-24 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-8299:
---
Status: Patch Available  (was: Open)

> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-8299.01.patch
>
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mount table config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Updated] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-24 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-8299:
---
Attachment: HADOOP-8299.01.patch

Attached v01 patch to address the following:
# {{fs.viewfs.mounttable..link./}} is no longer an allowed mount 
point. ViewFileSystem construction will throw an IOException recommending 
the use of {{fs.viewfs.mounttable..linkMergeSlash}} instead (sketched below).
# Test to verify that a {{link./}} configuration throws an IOException with 
that recommendation.
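
A hedged sketch of the before/after configuration (the mount-table name,
which is elided above, and the NameNode URI are made up):

{code}
import org.apache.hadoop.conf.Configuration;

public class MountTableSketch {
  static Configuration configure() {
    Configuration conf = new Configuration();
    // Rejected by this patch with an IOException pointing at linkMergeSlash:
    // conf.set("fs.viewfs.mounttable.cluster.link./", "hdfs://nn1/");
    // Recommended replacement (implemented under HADOOP-13055):
    conf.set("fs.viewfs.mounttable.cluster.linkMergeSlash", "hdfs://nn1/");
    return conf;
  }
}
{code}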

> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-8299.01.patch
>
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mount table config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Updated] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-24 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-8299:
---
Affects Version/s: (was: 2.0.0-alpha)
   3.0.0-alpha1

> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-8299.01.patch
>
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mount table config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Updated] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-24 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-8299:
---
Target Version/s: 3.0.0-alpha2  (was: 2.0.0-alpha)
 Summary: ViewFileSystem link slash mount point crashes with 
IndexOutOfBoundsException  (was: ViewFs doesn't work with a slash mount point)


ViewFileSystem support for LinkMergeSlash is now tracked by HADOOP-13055. 

This bug will now be used to address the issue discussed in the previous 
comment: the "link./" mount point throwing an IndexOutOfBoundsException.

> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mount table config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Assigned] (HADOOP-8299) ViewFs doesn't work with a slash mount point

2016-10-24 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy reassigned HADOOP-8299:
--

Assignee: Manoj Govindassamy

> ViewFs doesn't work with a slash mount point
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mount table config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Commented] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15603851#comment-15603851
 ] 

Hadoop QA commented on HADOOP-13727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
32s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} root: The patch generated 0 new + 4 unchanged - 3 
fixed = 4 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
28s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Assigned] (HADOOP-13756) LocalMetadataStore#put(DirListingMetadata) should also put file metadata into fileHash.

2016-10-24 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-13756:
-

Assignee: Aaron Fabbri

> LocalMetadataStore#put(DirListingMetadata) should also put file metadata into 
> fileHash.
> ---
>
> Key: HADOOP-13756
> URL: https://issues.apache.org/jira/browse/HADOOP-13756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Aaron Fabbri
>
> {{LocalMetadataStore#put(DirListingMetadata)}} only puts the metadata into 
> {{dirHash}}, thus all {{FileStatus}}es are missing from 
> {{LocalMetadataStore#fileHash()}}, which makes it confusing to use.
> So in the current way, to correctly put file status into the store (and also 
> set the {{authoritative}} flag), you need to run  {code}
> List<PathMetadata> metas = new ArrayList<>();
> boolean authoritative = true;
> for (S3AFileStatus status : files) {
>   PathMetadata meta = new PathMetadata(status);
>   metas.add(meta);   // the listing must collect every entry as well
>   store.put(meta);
> }
> DirListingMetadata dirMeta = new DirListingMetadata(parent, metas, authoritative);
> store.put(dirMeta);
> {code}
> Since calling {{store.put(dirMeta)}} alone is not correct, and calling it 
> after putting all sub-file {{FileStatus}}es repeats the same work, can we 
> just use a {{put(PathMetadata)}} and a {{get/setAuthoritative()}} in the 
> MetadataStore interface instead?






[jira] [Commented] (HADOOP-13756) LocalMetadataStore#put(DirListingMetadata) should also put file metadata into fileHash.

2016-10-24 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603849#comment-15603849
 ] 

Aaron Fabbri commented on HADOOP-13756:
---

Hi [~eddyxu], thanks for putting together this good description.  I've been 
meaning to rewrite part of LocalMetadataStore for the reason you outline here.  
(Tests pass because clients fall back to the backing store when 
get(PathMetadata) returns null.  Also, getFileStatus() calls and file creations 
cause much of the PathMetadata to be recorded.)

Two issues here:

(1) LocalMetadataStore implementation
(2) Design of Interface: Is DirListingMetadata required?

#1. I need to rework the data structures here.  Keeping two copies of each 
FileStatus is silly.  The "two hashtables" approach was a quick prototype that 
needs to be replaced.  Callers of the MetadataStore interface do not have to do 
a separate put() for each child in a directory; those FileStatuses were 
included in the put(DirListingMetadata).

#2. Do we need the "batched" API of put(DirListingMetadata)?  Here was the 
thought process so far:

You can think of DirListingMetadata as "results of listStatus() plus an 
authoritative bit".

I thought about removing DirListingMetadata and just doing put()/get() on 
PathMetadata for each directory entry.  Then we need a separate 
setAuthoritative(path, boolean) function.  Does this open up new race 
conditions?

If Client A is putting the results of a listStatus() into the MetadataStore, 
one by one, then calling setAuthoritative(parent), while Client B is putting or 
deleting entries in the same directory, maybe there is no race there.  Maybe we 
should think of your proposed setAuthoritative(path, boolean) function as a 
marker in time, after which the MetadataStore knows the full contents of the 
directory, instead of put(DirListingMeta, authoritative=true) as "this is the 
current snapshot of the full directory contents".

If we are implementing directory-level cache invalidation (probably necessary 
for S3AFileStatus#isEmptyDirectory(), and maybe as a CLI operation), it could 
be a little tricky.  If Client A is doing its sequence {set(child_meta_1), 
set(child_meta_2), ..., setAuthoritative(parent_path, true)} and Client B needs 
to invalidate the parent directory in the middle of that stream, I'm not sure 
how that would work.  The DirListingMetadata approach at least makes it 
possible for implementations to handle it, even though many (e.g. DynamoDB) 
will likely not handle that case.

For #1, I will fix the LocalMetadataStore and add tests to catch this sort of 
case.

For #2, I'd prefer to keep this interface until we get the major patches merged 
(HADOOP-13631, HADOOP-13651, and HADOOP-13449) and then do a follow-up JIRA for 
any interface changes.  I'm open to suggestions though; what do you think?
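
To make the trade-off concrete, here is a rough sketch of the two interface 
shapes being weighed. Every name and signature below is illustrative only, not 
the committed S3Guard API:

{code}
// Stand-ins for the S3Guard types discussed in this thread.
class Path {}
class PathMetadata {}
class DirListingMetadata {}

// Shape A (current): the "batched" call carries the full listing plus the
// authoritative bit, so an implementation can treat it as one atomic snapshot.
interface BatchedMetadataStore {
  void put(PathMetadata meta);
  void put(DirListingMetadata dirListing);
}

// Shape B (proposed): per-entry puts plus a separate time marker, which is
// where the invalidation race described above could appear.
interface PerEntryMetadataStore {
  void put(PathMetadata meta);
  void setAuthoritative(Path dir, boolean authoritative);
}
{code}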

> LocalMetadataStore#put(DirListingMetadata) should also put file metadata into 
> fileHash.
> ---
>
> Key: HADOOP-13756
> URL: https://issues.apache.org/jira/browse/HADOOP-13756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>
> {{LocalMetadataStore#put(DirListingMetadata)}} only puts the metadata into 
> {{dirHash}}, thus all {{FileStatus}}es are missing from 
> {{LocalMetadataStore#fileHash()}}, which makes it confusing to use.
> So currently, to correctly put file status into the store (and also set the 
> {{authoritative}} flag), you need to run:
> {code}
> List<PathMetadata> metas = new ArrayList<>();
> boolean authoritative = true;
> for (S3AFileStatus status : files) {
>   PathMetadata meta = new PathMetadata(status);
>   metas.add(meta);  // collect for the directory listing as well
>   store.put(meta);
> }
> DirListingMetadata dirMeta = new DirListingMetadata(parent, metas, authoritative);
> store.put(dirMeta);
> {code}
> Since solely calling {{store.put(dirMeta)}} is not correct, and calling it 
> after putting every sub-file {{FileStatus}} does repetitive work, can we just 
> use a {{put(PathMetadata)}} and a {{get/setAuthoritative()}} in the 
> MetadataStore interface instead?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603799#comment-15603799
 ] 

Andrew Wang commented on HADOOP-13696:
--

[~ste...@apache.org] could you comment on my previous comment? Again, this is 
weird behavior compared to the other filesystems in hadoop-common.

> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile" scope, so it gets 
> everywhere downstream. Marking it as "provided" would mean that it would only 
> be needed by those programs which want the SFTP filesystem, and, if they 
> wanted to use a different jsch version, there would be no Maven problems.
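
For reference, the proposed change amounts to one scope element on the jsch 
dependency in hadoop-common's POM (a sketch; the version stays managed in 
hadoop-project as usual):

{code}
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <scope>provided</scope>
</dependency>
{code}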



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13757) Remove verifyBuckets overhead in S3AFileSystem::initialize()

2016-10-24 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13757:
-

 Summary: Remove verifyBuckets overhead in 
S3AFileSystem::initialize()
 Key: HADOOP-13757
 URL: https://issues.apache.org/jira/browse/HADOOP-13757
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Rajesh Balamohan
Priority: Minor


{{S3AFileSystem.initialize()}} invokes verifyBuckets, but when the bucket does 
not exist and the check gets a 403 error response, 
{{s3.doesBucketExist(bucketName)}} still ends up returning {{true}}. Given 
that, verifyBuckets() is an unnecessary call during initialization.
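
For reference, the initialization-time check has roughly this shape (a 
simplified sketch, not the exact source); since the AWS SDK's 
{{doesBucketExist}} treats a 403 as "exists", the probe cannot distinguish a 
forbidden bucket from a usable one:

{code}
// Simplified illustration of the verifyBuckets() probe in initialize():
if (!s3.doesBucketExist(bucketName)) {
  throw new FileNotFoundException("Bucket " + bucketName + " does not exist");
}
{code}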



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603721#comment-15603721
 ] 

Xiao Chen commented on HADOOP-13669:


Hi [~aw] [~brahmareddy] [~jojochuang],
Could you take a look at addendum2 and see if it makes sense? I'd like to have 
trunk fixed soon. :)

Thanks.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1, HADOOP-13669.addendem2.patch, 
> HADOOP-13669.addendum.patch, trigger.02.patch, trigger.patch
>
>
> In some recent investigation, it turned out that when KMS throws an exception 
> (into Tomcat), it is not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message

2016-10-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603718#comment-15603718
 ] 

Xiao Chen commented on HADOOP-13720:


Thanks [~yzhangal] for the patch.
Looking at the class, I think we should also improve the message for a similar 
case:
{code}
if (id.getMaxDate() < now) {
  throw new InvalidToken(renewer + " tried to renew an expired token");
}
{code}

What do you think?
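
For illustration, the improved message might carry the dates along these lines 
(a sketch only, reusing the names from the snippet above; the exact wording is 
up to the patch):

{code}
if (id.getMaxDate() < now) {
  throw new InvalidToken(renewer + " tried to renew an expired token ("
      + id + "); max date: " + id.getMaxDate() + ", now: " + now);
}
{code}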

> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>       throws InvalidToken {
>     assert Thread.holdsLock(this);
>     DelegationTokenInformation info = getTokenInfo(identifier);
>     if (info == null) {
>       throw new InvalidToken("token (" + identifier.toString()
>           + ") can't be found in cache");
>     }
>     if (info.getRenewDate() < Time.now()) {
>       throw new InvalidToken("token (" + identifier.toString() + ") is expired");
>     }
>     return info;
>   }
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this JIRA as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603716#comment-15603716
 ] 

Hadoop QA commented on HADOOP-10075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 75 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client . 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-common-project/hadoop-kms in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 24 new + 2505 
unchanged - 71 fixed = 2529 total (was 2576) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
33s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client . 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-maven-plugins generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  6s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 15m  
7s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}257m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-maven-plugins |
|  |  Exceptional return value of java.io.File.mkdirs() ignored in 
org.apache.hadoop.maven.plugin.resourcegz.ResourceGzMojo$GZConsumer.accept(Path)
  At ResourceGzMojo.java:ignored in 
org.apache.hadoop.maven.plugin.resourcegz.ResourceGzMojo$GZConsumer.accept(Path)
  At ResourceGzMojo.java:[line 105] |
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | 

[jira] [Updated] (HADOOP-13720) Add more info to "token ... is expired" message

2016-10-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13720:
---
Description: 
Currently AbstractDelegationTokenSecretManager$checkToken does

{code}
  protected DelegationTokenInformation checkToken(TokenIdent identifier)
      throws InvalidToken {
    assert Thread.holdsLock(this);
    DelegationTokenInformation info = getTokenInfo(identifier);
    if (info == null) {
      throw new InvalidToken("token (" + identifier.toString()
          + ") can't be found in cache");
    }
    if (info.getRenewDate() < Time.now()) {
      throw new InvalidToken("token (" + identifier.toString() + ") is expired");
    }
    return info;
  }
{code}

When a token is expired, we throw the above exception without printing out the 
{{info.getRenewDate()}} in the message. If we print it out, we could know for 
how long the token has not been renewed. This will help us investigate certain 
issues.

Create this JIRA as a request to add that part.



  was:
Currently AbstractDelegationTokenSecretM anager$checkToken does

{code}
  protected DelegationTokenInformation checkToken(TokenIdent identifier)
      throws InvalidToken {
    assert Thread.holdsLock(this);
    DelegationTokenInformation info = getTokenInfo(identifier);
    if (info == null) {
      throw new InvalidToken("token (" + identifier.toString()
          + ") can't be found in cache");
    }
    if (info.getRenewDate() < Time.now()) {
      throw new InvalidToken("token (" + identifier.toString() + ") is expired");
    }
    return info;
  }
{code}

When a token is expired, we throw the above exception without printing out the 
{{info.getRenewDate()}} in the message. If we print it out, we could know for 
how long the token has not been renewed. This will help us investigate certain 
issues.

Create this JIRA as a request to add that part.




> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>       throws InvalidToken {
>     assert Thread.holdsLock(this);
>     DelegationTokenInformation info = getTokenInfo(identifier);
>     if (info == null) {
>       throw new InvalidToken("token (" + identifier.toString()
>           + ") can't be found in cache");
>     }
>     if (info.getRenewDate() < Time.now()) {
>       throw new InvalidToken("token (" + identifier.toString() + ") is expired");
>     }
>     return info;
>   }
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this JIRA as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-24 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13055:

Attachment: HADOOP-13055.04.patch

Attaching v04 patch after rebase.

> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this case the root of the mount table is merged with the root of
>  *   hdfs://nn99/
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13747) Use LongAdder for more efficient metrics tracking

2016-10-24 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-13747:
-
Attachment: HADOOP-13747.patch
benchmark_results

As discussed in the comments of HDFS-10872, doing aggregation-on-read would be 
ideal. I'm attaching some relevant benchmark numbers supporting this. The 
{{MetricBenchmark.increment}} numbers show the overhead of contending on a 
single metric (as discussed further in HDFS-10872; this run also includes an 
implementation which uses an {{AtomicLong}} for updating). The 
{{MetricGroupBenchmark.increment}} numbers show the overhead of having 50 
different metrics with 100 threads trying to update them all as fast as 
possible (each individual metric is stored as an {{AtomicReferenceAdder}} from 
the earlier benchmarks). The implementations compared are:

* {{Synchronized}}: essentially the current implementation, with 
{{synchronized}} methods.
* {{RegularHashMap}}: stores the metrics in a {{HashMap}} which provides no 
synchronization, but serves as a baseline.
* {{ConcurrentHashMapComputeIfAbsent}}: uses the {{computeIfAbsent}} method to 
insert a new metric if necessary.
* {{HashMapRWLock}}: wraps a {{HashMap}} in a {{ReadWriteLock}}.
* {{LocalUpdateAggregateWeakRef}}: each thread stores a local copy of the 
metrics and, on snapshot, they are all aggregated. This is achieved by keeping 
one {{ConcurrentLinkedDeque}} of {{WeakReference}}s to maps of metrics stored 
in a {{ThreadLocal}}. Each thread inserts its local map into the 
{{ConcurrentLinkedDeque}}, and upon snapshot the snapshotting thread traverses 
this queue to read metrics from each thread's map. If a {{WeakReference}} is 
unresolvable (i.e., the thread has died), the snapshotting thread removes the 
reference from the queue.

I'm attaching a patch implementing {{LocalUpdateAggregateWeakRef}} but not 
marking the patch available. I just want to put the ideas up in code for 
discussion; it needs a little cleanup of naming and such before it's ready to 
go.
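
To make the {{LocalUpdateAggregateWeakRef}} scheme concrete, here is a minimal 
single-counter sketch of the same idea (my own illustration, not the attached 
patch):

{code}
import java.lang.ref.WeakReference;
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedDeque;

public class ThreadLocalCounter {
  /** One cell per thread; only the owning thread ever writes it. */
  private static final class Cell {
    volatile long value; // volatile gives the snapshotting thread visibility
  }

  private final ConcurrentLinkedDeque<WeakReference<Cell>> cells =
      new ConcurrentLinkedDeque<>();

  private final ThreadLocal<Cell> local = ThreadLocal.withInitial(() -> {
    Cell c = new Cell();
    cells.add(new WeakReference<>(c)); // register for later aggregation
    return c;
  });

  /** Uncontended update path: touches only the calling thread's cell. */
  public void increment() {
    local.get().value++; // safe: single writer per cell
  }

  /** Aggregation-on-read: sum live cells, pruning cells of dead threads. */
  public long snapshot() {
    long sum = 0;
    Iterator<WeakReference<Cell>> it = cells.iterator();
    while (it.hasNext()) {
      Cell c = it.next().get();
      if (c == null) {
        it.remove(); // owning thread died and its cell was collected
      } else {
        sum += c.value;
      }
    }
    return sum;
  }
}
{code}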

> Use LongAdder for more efficient metrics tracking
> -
>
> Key: HADOOP-13747
> URL: https://issues.apache.org/jira/browse/HADOOP-13747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
> Attachments: HADOOP-13747.patch, benchmark_results
>
>
> Currently many metrics, including {{RpcMetrics}} and {{RpcDetailedMetrics}}, 
> use a synchronized counter that is updated by all handler threads (many 
> hundreds in large production clusters). As [~andrew.wang] suggested, it would 
> be more efficient to use the 
> [LongAdder|http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?view=co]
>  library, which dynamically creates intermediate-result variables.
> Assigning to [~xkrogen], who has already done some investigation on this.
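
For reference, the class linked above shipped in JDK 8 as 
{{java.util.concurrent.atomic.LongAdder}}; minimal usage looks like this:

{code}
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
  public static void main(String[] args) {
    LongAdder rpcCount = new LongAdder();
    rpcCount.increment();               // striped, contention-tolerant write
    System.out.println(rpcCount.sum()); // aggregation happens at read time
  }
}
{code}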



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-24 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603659#comment-15603659
 ] 

Robert Kanter commented on HADOOP-10075:


I have hopefully fixed the remaining test failures other than 
{{TestQueuingContainerManager}} (see YARN-5377).

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603643#comment-15603643
 ] 

Hadoop QA commented on HADOOP-13738:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 30 unchanged - 2 fixed = 37 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Bad attempt to compute absolute value of signed random integer in 
org.apache.hadoop.util.DiskChecker.makeRandomFile(File)  At 
DiskChecker.java:value of signed random integer in 
org.apache.hadoop.util.DiskChecker.makeRandomFile(File)  At 
DiskChecker.java:[line 258] |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835033/HADOOP-13738.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8156dfa8c3d0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9d17585 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10881/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10881/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 

[jira] [Created] (HADOOP-13756) LocalMetadataStore#put(DirListingMetadata) should also put file metadata into fileHash.

2016-10-24 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-13756:
--

 Summary: LocalMetadataStore#put(DirListingMetadata) should also 
put file metadata into fileHash.
 Key: HADOOP-13756
 URL: https://issues.apache.org/jira/browse/HADOOP-13756
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Lei (Eddy) Xu


{{LocalMetadataStore#put(DirListingMetadata)}} only puts the metadata into 
{{dirHash}}, thus all {{FileStatus}}es are missing from 
{{LocalMetadataStore#fileHash()}}, which makes it confusing to use.

So currently, to correctly put file status into the store (and also set the 
{{authoritative}} flag), you need to run:

{code}
List<PathMetadata> metas = new ArrayList<>();
boolean authoritative = true;
for (S3AFileStatus status : files) {
  PathMetadata meta = new PathMetadata(status);
  metas.add(meta); // collect for the directory listing as well
  store.put(meta);
}
DirListingMetadata dirMeta = new DirListingMetadata(parent, metas, authoritative);
store.put(dirMeta);
{code}

Since solely calling {{store.put(dirMeta)}} is not correct, and calling it 
after putting every sub-file {{FileStatus}} does repetitive work, can we just 
use a {{put(PathMetadata)}} and a {{get/setAuthoritative()}} in the 
MetadataStore interface instead?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Attachment: HADOOP-13727-branch-2.005.patch

I'm attaching patch revision 005, rebasing to current.

bq. it passing a full test run against your endpoint

The rebased patch applied to branch-2 passes a full test run against us-west-2.

bq. you doing a look at the generated aws site page (or a github/markdown 
editor) to make sure the 1. 2 ... bullet points are pulled out. I think you may 
need an extra line.

Everything looks to be rendering nicely.  These are actually part of Markdown 
code blocks that repeat snippets of core-site.xml.  Thanks for being watchful 
of the documentation though.

bq. Ideally it'd be good to do the whole test suite to verify that IAM works 
across the lot...

For this last test run, I relied on instance profile credentials for all of the 
S3A tests.  FWIW, I now use instance profile credentials pretty regularly for 
my test runs of all patches I review.

With the rebase and addressing the above, all feedback has been resolved.  I 
plan to commit based on Steve's +1 after a fresh pre-commit run.

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch, HADOOP-13727-branch-2.005.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.
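
A minimal sketch of that sharing idea (illustrative only; the class name and 
wiring here are mine, not necessarily the patch's):

{code}
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

/**
 * One process-wide provider, so credential refreshes hit the EC2 Instance
 * Metadata Service once per refresh instead of once per thread.
 */
public final class SharedCredentialsProviderHolder {
  private static final InstanceProfileCredentialsProvider INSTANCE =
      new InstanceProfileCredentialsProvider();

  private SharedCredentialsProviderHolder() {
  }

  public static InstanceProfileCredentialsProvider getInstance() {
    return INSTANCE;
  }
}
{code}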



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13738) DiskChecker should perform some disk IO

2016-10-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13738:
---
Attachment: HADOOP-13738.03.patch

v03. Fix a bad reference in javadocs.

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch, HADOOP-13738.02.patch, 
> HADOOP-13738.03.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
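
As a rough illustration of that idea (file name and write size here are made 
up; the actual {{doDiskIo}} adds retries and random data):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

final class DiskIoProbe {
  /** Write a small file and sync it, so a dead disk actually fails the check. */
  static void probe(File dir) throws IOException {
    File probe = new File(dir, ".disk-check-" + System.nanoTime());
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(new byte[4096]); // force a real write, not just a stat()
      out.getFD().sync();        // flush through to the device
    } finally {
      probe.delete();            // best-effort cleanup
    }
  }
}
{code}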



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13738) DiskChecker should perform some disk IO

2016-10-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13738:
---
Attachment: HADOOP-13738.02.patch

v02 patch rebased to trunk. Also made the {{doDiskIo}} method slightly more 
conservative (it retries up to three times on any IOException, not just FNFE), 
and added two more tests for the changed behavior.

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch, HADOOP-13738.02.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-10-24 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603414#comment-15603414
 ] 

Aaron Fabbri commented on HADOOP-13631:
---

[~cnauroth] HADOOP-13651 is based on this patch, and I think that this move() 
interface works well in practice.  Check out my latest patch there for details.

The batch interface used here doesn't appear to be too onerous, and if it ever 
became an issue, it would be easy to put a single-path move() wrapper around 
the batched move() interface used here.



> S3Guard: implement move() for LocalMetadataStore, add unit tests
> 
>
> Key: HADOOP-13631
> URL: https://issues.apache.org/jira/browse/HADOOP-13631
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13631-HADOOP-13345.001.patch
>
>
> Building on HADOOP-13573 and HADOOP-13452, implement move() in 
> LocalMetadataStore and associated MetadataStore unit tests.
> (Making this a separate JIRA to break up work into decent-sized and 
> reviewable chunks.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603396#comment-15603396
 ] 

Mingliang Liu commented on HADOOP-13449:


Thanks! I'll review that patch this week.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603395#comment-15603395
 ] 

Ravi Prakash commented on HADOOP-10075:
---

Thanks Robert for your work. I'm trying to run all the unit tests, which takes 
10 hours on the computer I can spare. If the Jenkins bot came back with a +1, 
it'd be much easier for me to +1 the patch too.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603380#comment-15603380
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

Heads-up: I just posted my latest patch to HADOOP-13651.  Integration with 
S3AFileSystem is looking pretty good.  I did make some changes to the way Paths 
are handled.  I have two patches outstanding that could go in: HADOOP-13631 
(move implementation) and HADOOP-13651 (S3AFileSystem integration).  We may 
want to start reviewing and committing these to avoid merge hell when this one 
gets done.

The work in my latest patch should make your life easier on this JIRA when you 
get to running all the S3A integration tests.  I'm available to help with 
that, e.g. if you want some test refactoring to make the MetadataStore 
contract tests easier to apply to S3AFileStatus (where not all FileSystem 
fields need to be persisted), just let me know.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13754) Hadoop-Azure Update WASB URI format to support SAS token in it.

2016-10-24 Thread Sumit Dubey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603339#comment-15603339
 ] 

Sumit Dubey commented on HADOOP-13754:
--

Steve, thanks for going over this. Our use case is more scoped than the s3a 
case you mentioned.
We are trying to use encoded SAS tokens 
(https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/)
 which are time-limited, scope-limited, and access-limited (generally 
read-only!). The exposure resulting from a SAS token is much more limited 
than, say, an S3 bucket key, which I believe is a potentially time-unbounded 
secret key for all the files inside the S3 bucket.

> Hadoop-Azure Update WASB URI format to support SAS token in it.
> ---
>
> Key: HADOOP-13754
> URL: https://issues.apache.org/jira/browse/HADOOP-13754
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Sumit Dubey
> Fix For: 2.7.3
>
> Attachments: HADOOP-13754-branch-2.7.3.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Currently the Azure WASB adapter code supports WASB URLs in this format: 
> wasb://[containername@]youraccount.blob.core.windows.net/testDir, with the 
> credentials retrieved from configuration and scoped to a container.
> With this change we want to:
> 1) Change the URL to contain a file-level SAS token in the URL:
> wasb://[containername[:<sas token>]]@youraccount.blob.core.windows.net/testDir
> 2) Scope access to the blob/file level.
> 3) Add tests for the new URL format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13586) Hadoop 3.0 build broken on windows

2016-10-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603330#comment-15603330
 ] 

Chris Nauroth edited comment on HADOOP-13586 at 10/24/16 9:51 PM:
--

Steve, do you still have a repro for this?  I just completed a full build of 
current trunk on Windows successfully.

HADOOP-13149 was a similar bug report that I fixed, but that's fairly old.  Is 
there any chance you were running without this patch?


was (Author: cnauroth):
Steve, do you still have a repro for this?  I just completely a full build of 
current trunk on Windows successfully.

HADOOP-13149 was a similar bug report that I fixed, but that's fairly old.  Is 
there any chance you were running without this patch?

> Hadoop 3.0 build broken on windows
> --
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>Priority: Blocker
>
> Builds on windows fail, even before getting to the native bits
> Looks like dev-support/bin/dist-copynativelibs isn't windows-ready



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13586) Hadoop 3.0 build broken on windows

2016-10-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603330#comment-15603330
 ] 

Chris Nauroth commented on HADOOP-13586:


Steve, do you still have a repro for this?  I just completely a full build of 
current trunk on Windows successfully.

HADOOP-13149 was a similar bug report that I fixed, but that's fairly old.  Is 
there any chance you were running without this patch?

> Hadoop 3.0 build broken on windows
> --
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>Priority: Blocker
>
> Builds on windows fail, even before getting to the native bits
> Looks like dev-support/bin/dist-copynativelibs isn't windows-ready



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603319#comment-15603319
 ] 

Wei-Chiu Chuang edited comment on HADOOP-11798 at 10/24/16 9:47 PM:


+1. I'll postpone committing until the end of tomorrow to allow any watchers to comment.


was (Author: jojochuang):
+1 Committing patch v5.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch, HADOOP-11798-v5.patch
>
>
> Raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover only one erased block, which is by far the most 
> common case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is worth having for the performance gain.
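
As a toy illustration of the single-erasure fast path (pure Java, just to show 
the math: with parity = d0 ^ d1 ^ ... ^ dn-1, any one missing block is the XOR 
of all the survivors):

{code}
final class XorRecoveryDemo {
  /** Recover one erased block as the XOR of all surviving blocks and parity. */
  static byte[] recover(byte[][] survivors) {
    byte[] out = new byte[survivors[0].length]; // blocks assumed equal length
    for (byte[] block : survivors) {
      for (int i = 0; i < out.length; i++) {
        out[i] ^= block[i]; // XOR-accumulate
      }
    }
    return out;
  }
}
{code}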



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603319#comment-15603319
 ] 

Wei-Chiu Chuang commented on HADOOP-11798:
--

+1 Committing patch v5.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch, HADOOP-11798-v5.patch
>
>
> Raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover only one erased block, which is by far the most 
> common case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is worth having for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder

2016-10-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603307#comment-15603307
 ] 

Wei-Chiu Chuang commented on HADOOP-13665:
--

[~lewuathe], I'd appreciate it if you could work on this when you get a chance. Thanks!

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
>
> The current EC codec supports a single coder only (by default the pure Java 
> implementation). If the native coder is specified but is unavailable, it 
> should fall back to the pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native 
> codecs, such as transport encryption (see {{CryptoCodec.java}}), which 
> supports fallback by specifying two or more coders as the value of a 
> property and loading them in order.
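
A sketch of the fallback pattern being suggested (names here are hypothetical, 
not the actual codec factory):

{code}
import java.util.List;
import java.util.function.Supplier;

final class FallbackCoderLoader {
  /** Try each configured coder factory in order; the first success wins. */
  static <T> T loadFirstAvailable(List<Supplier<T>> factories) {
    for (Supplier<T> factory : factories) {
      try {
        return factory.get(); // e.g. the native coder first
      } catch (Throwable unavailable) {
        // native libraries missing etc.; fall through to the next coder
      }
    }
    throw new IllegalStateException("No usable coder configured");
  }
}
{code}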



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-10-24 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13651:
--
Attachment: HADOOP-13651-HADOOP-13345.002.patch

Attaching v2 patch. We should consider merging this soon, as it makes a lot of 
changes in the MetadataStore test code around the new requirement for 
fully-qualified paths.

It implements keeping {{S3AFileStatus#isEmptyDirectory()}} updated, and also 
hardens the treatment of Path objects, so we are consistently dealing with 
fully qualified paths.

All unit and integration tests pass as is.

If you enable LocalMetadataStore with authoritative = true (see the 
commented-out core-site.xml changes in the patch), all tests pass except for 
occasional failures in 
{{ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing}}.
I'm still working on that one.

If you enable LocalMetadataStore with authoritative = false, almost all tests 
pass as well, but I think I still need to fix an FS statistics delta test that 
is off by one.



> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603273#comment-15603273
 ] 

Arpit Agarwal commented on HADOOP-13737:


Thank you for reviewing and committing this [~anu]!

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch, HADOOP-13737.02.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603229#comment-15603229
 ] 

Mingliang Liu edited comment on HADOOP-13449 at 10/24/16 9:14 PM:
--

{quote}
I feel that we should have a consistent contract for all the stores.
{quote}
Yes, we should fix this.

{quote}
Regarding isEmptyDirectory. should we store it in dynamodb as well?
{quote}
I plan to store this in the v1 patch. We can optimize this in the future if it 
causes too many requests to DynamoDB.

{quote}
DynamoDBMetadataStore should have a getTableName() or getTable().
{quote}
That makes perfect sense as well. Perhaps a {{Table}} is better, so that we 
can (will we?) operate on it directly (query/scan etc.) in the CLI tool?

{quote}
For authoritative(), do you think storing it as a flag in the DynamoDB is a 
good idea?
{quote}
That's a good suggestion. I can try this.


was (Author: liuml07):
{quote}
I feel that we should have a consistent contract for all the stores.
{quote}
Yes, we should fix this.

{quote}
Regarding isEmptyDirectory. should we store it in dynamodb as well?
{quote}
I plan to store this in the v1 patch. We can optimize this in the future if it 
causes too many requests to DynamoDB.

{quote}
DynamoDBMetadataStore should have a getTableName() or getTable().
{quote}
That makes perfect sense as well.

{quote}
For authoritative(), do you think storing it as a flag in the DynamoDB is a 
good idea?
{quote}
That's a good suggestion. I can try this.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603231#comment-15603231
 ] 

Mingliang Liu commented on HADOOP-13449:


Thanks for the review, [~eddyxu]. Will post new patches addressing these.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603229#comment-15603229
 ] 

Mingliang Liu commented on HADOOP-13449:


{quote}
I feel that we should have a consistent contract for all the stores.
{quote}
Yes, we should fix this.

{quote}
Regarding isEmptyDirectory. should we store it in dynamodb as well?
{quote}
I plan to store this in the v1 patch. We can optimize this in the future if it 
causes too many requests to DynamoDB.

{quote}
DynamoDBMetadataStore should have a getTableName() or getTable().
{quote}
That makes perfect sense as well.

{quote}
For authoritative(), do you think storing it as a flag in the DynamoDB is a 
good idea?
{quote}
That's a good suggestion. I can try this.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603207#comment-15603207
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


Hi, [~liuml07]

The patch looks good to me overall. Looking forward to filling the gaps in the tests. 

{code:title=PathMetadataToDynamoDBTranslation.java}
final FileStatus fileStatus = isDir ? new S3AFileStatus(true, false, path) : 
new S3AFileStatus(0, 0, path, 0);
{code}

Here, it seems that only the path is correctly populated. It assumes that 
{{S3AFileSystem}} only checks the existence of the file in the {{MS}}. This 
differs from {{InMemoryMetadataStore}}. I feel that we should have a consistent 
contract for all the stores.

* Regarding {{isEmptyDirectory}}: should we store it in DynamoDB as well? The 
drawback is that we would have to update this field in DynamoDB in 
{{S3AFileStatus#finishWrite}} for every file. 

* {{DynamoDBMetadataStore}} should have a {{getTableName()}} or {{getTable()}}. 
The table name is parsed within {{initialize()}}, so from the caller's (i.e., 
the CLI tool's) point of view, it is difficult to get the table name to call 
{{deleteTable(String tableName)}}. 


* For {{authoritative()}}, do you think storing it as a flag in DynamoDB is 
a good idea? (See the sketch below for both of these.)
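
To make the {{getTable()}} and flag suggestions concrete, here is a minimal 
sketch; the {{is_authoritative}} attribute name and the key schema are my own 
illustrative assumptions, not anything in the attached patch:

{code:title=DynamoDBMetadataStore accessors (illustrative sketch)}
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;

public class DynamoDBMetadataStoreSketch {
  private Table table;   // resolved inside initialize()

  /** Expose the table so a CLI tool can query/scan it, or delete it by name. */
  public Table getTable() {
    return table;
  }

  public String getTableName() {
    return table.getTableName();
  }

  /** Persist the listing's authoritative bit as a boolean attribute. */
  static Item markAuthoritative(Item item, boolean authoritative) {
    return item.withBoolean("is_authoritative", authoritative);
  }
}
{code}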


> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13755) Purge superfluous/obsolete S3A Tests

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603190#comment-15603190
 ] 

Hadoop QA commented on HADOOP-13755:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 28s{color} | {color:orange} root: The patch generated 10 new + 53 unchanged 
- 1 fixed = 63 total (was 54) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2016-10-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603185#comment-15603185
 ] 

Wei-Chiu Chuang commented on HADOOP-13715:
--

To follow up -- it looks like we would want FileStatus to be extensible, to 
accommodate new flags proposed by [~drankye] and [~rakesh_r]. I am +1 for 
[~steve_l]'s proposal to add a bitfield (or EnumSet?) so that we do not need to 
worry about breaking API compatibility in the near future.
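
A rough illustration of what such a field could look like; the enum values and 
names here are purely hypothetical, not a committed API:

{code:title=EnumSet-based FileStatus flags (hypothetical sketch)}
import java.util.EnumSet;

public class FileStatusSketch {
  /** New attributes can be added here without adding new abstract getters. */
  public enum AttrFlags { HAS_ACL, ENCRYPTED, ERASURE_CODED }

  private final EnumSet<AttrFlags> attrs = EnumSet.noneOf(AttrFlags.class);

  public boolean isErasureCoded() {
    return attrs.contains(AttrFlags.ERASURE_CODED);
  }

  void setErasureCoded() {
    attrs.add(AttrFlags.ERASURE_CODED);
  }
}
{code}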

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if it needs to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13586) Hadoop 3.0 build broken on windows

2016-10-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603078#comment-15603078
 ] 

Andrew Wang commented on HADOOP-13586:
--

Maybe this JIRA isn't getting enough attention, so it's worth asking for help 
on common-dev.

If no help emerges though, I don't think holding the release (particularly an 
alpha) will change that. This cross-platform stuff really depends on help from 
interested contributors.

> Hadoop 3.0 build broken on windows
> --
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>Priority: Blocker
>
> Builds on windows fail, even before getting to the native bits
> Looks like dev-support/bin/dist-copynativelibs isn't windows-ready



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2016-10-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603038#comment-15603038
 ] 

Hudson commented on HADOOP-13626:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10666 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10666/])
HADOOP-13626. Remove distcp dependency on FileStatus serialization (cdouglas: 
rev a1a0281e12ea96476e75b076f76d5b5eb5254eea)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListingFileStatus.java
* (add) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListingFileStatus.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestRetriableFileCopyCommand.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java


> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch, HADOOP-13626.004.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.
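
As a sketch of the decoupling, the record can carry its own {{Writable}} 
implementation over just the fields DistCp compares, rather than inheriting 
{{FileStatus}}'s serialization; the field set below is illustrative, not the 
committed layout:

{code:title=Standalone Writable record (illustrative sketch)}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class CopyListingRecordSketch implements Writable {
  private long length;
  private long modificationTime;
  private String path;

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(length);
    out.writeLong(modificationTime);
    out.writeUTF(path);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    length = in.readLong();
    modificationTime = in.readLong();
    path = in.readUTF();
  }
}
{code}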



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-10-24 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
Attachment: HADOOP-10075.011.patch

The 011 patch:
- Removes trailing whitespace from JS files
- Replaces tabs with spaces in JS files
- Fixes broken unit tests in {{TestTimelineReaderWebServices}} that came in 
during an earlier rebase
- Rebases on the latest trunk


> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13754) Hadoop-Azure Update WASB URI format to support SAS token in it.

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15603015#comment-15603015
 ] 

Steve Loughran commented on HADOOP-13754:
-

I'm going to need a very good justification for this, because we can't stop 
those URIs leaking into the logs everywhere, and those tokens are the secrets 
needed to gain access to the store. 

We recently went to some effort in HADOOP-3733 to try to stop this in s3a, by 
stripping them out early, not using them in equivalence tests, etc. Still, they 
end up throughout the logs, which is why s3a now warns that the feature "may be 
removed in future".

Assuming you are doing this for cross-credential bucket access: Azure and s3a 
both need a better way of doing it than embedding secrets into paths, which get 
everywhere.
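
To illustrate the exposure: any log statement that stringifies the raw URI also 
logs the secret, so one defensive step (a generic sketch, not the actual s3a 
code) is to drop the userinfo and query parts before the URI is ever printed:

{code:title=Sanitizing a URI before logging (generic sketch)}
import java.net.URI;
import java.net.URISyntaxException;

public final class UriSecretsSketch {
  /** Return a copy of the URI with embedded credentials and query stripped. */
  public static URI sanitize(URI uri) throws URISyntaxException {
    return new URI(uri.getScheme(), null /* drop userinfo */, uri.getHost(),
        uri.getPort(), uri.getPath(), null /* drop query */, uri.getFragment());
  }
}
{code}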

> Hadoop-Azure Update WASB URI format to support SAS token in it.
> ---
>
> Key: HADOOP-13754
> URL: https://issues.apache.org/jira/browse/HADOOP-13754
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Sumit Dubey
> Fix For: 2.7.3
>
> Attachments: HADOOP-13754-branch-2.7.3.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Currently the Azure WASB adapter code supports WASB URLs in this format: 
> wasb://[containername@]youraccount.blob.core.windows.net/testDir, with the 
> credentials retrieved from configuration and scoped to a container.
> With this change we want to:
> 1) change the URL to contain a file-level SAS token:
> wasb://[containername[:]]@youraccount.blob.core.windows.net/testDir
> 2) scope access to the blob/file level
> 3) add tests for the new URL format



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2016-10-24 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13626:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

I committed this.

> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch, HADOOP-13626.004.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13753) Hadoop-Azure Update WASB URI format to support SAS token in it.

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13753.
-
Resolution: Invalid

> Hadoop-Azure Update WASB URI format to support SAS token in it.
> ---
>
> Key: HADOOP-13753
> URL: https://issues.apache.org/jira/browse/HADOOP-13753
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Sumit Dubey
>Priority: Minor
>
> The filesystem spec behaviour of {{getFileBlockLocations(Path)}} and its base 
> implementation raise an FNFE if the path isn't there. But the method's 
> javadocs say "For a nonexistent file or regions, null will be returned."
> * look at HDFS to see what it does
> * make the spec and docs consistent with that
> * add contract tests
> I actually think HDFS raises FNFE, so it's the javadocs that are wrong. We 
> just need a contract test & fixed-up javadocs
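
A sketch of the kind of contract test suggested above; the class name and setup 
are hypothetical, and the FNFE expectation is the behavior being proposed, not 
yet the written spec:

{code:title=Contract test sketch (hypothetical)}
import java.io.FileNotFoundException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class TestBlockLocationsContractSketch {
  private FileSystem fs;   // assumed to be wired up by the contract harness

  @Test(expected = FileNotFoundException.class)
  public void testMissingPathRaisesFNFE() throws Exception {
    // Proposed contract: nonexistent path -> FNFE, not null.
    fs.getFileBlockLocations(new Path("/no/such/file"), 0, 1);
  }
}
{code}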



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13586) Hadoop 3.0 build broken on windows

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602990#comment-15602990
 ] 

Steve Loughran commented on HADOOP-13586:
-

The issue is that, irrespective of server-side use, this is needed client side 
to run Spark standalone, which creates problems when it stops working. That's 
why I build and release Hadoop binaries for 2.x: 
https://github.com/steveloughran/winutils

I do think the release should look at Windows. If people think otherwise, well, 
that's something that can be raised on the dev lists. Maybe also cc lists like 
spark-dev: "OK if we drop client-side and standalone code working on 
Windows?". 



> Hadoop 3.0 build broken on windows
> --
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>Priority: Blocker
>
> Builds on windows fail, even before getting to the native bits
> Looks like dev-support/bin/dist-copynativelibs isn't windows-ready



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602989#comment-15602989
 ] 

Hadoop QA commented on HADOOP-13626:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 36 unchanged - 3 fixed = 36 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-tools_hadoop-distcp generated 0 new + 49 
unchanged - 1 fixed = 49 total (was 50) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
42s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13626 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834997/HADOOP-13626.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8277f3b98167 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b18f35f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10878/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10878/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> 

[jira] [Commented] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602985#comment-15602985
 ] 

Hadoop QA commented on HADOOP-13605:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 47s{color} 
| {color:red} root-jdk1.7.0_111 with JDK v1.7.0_111 generated 1 new + 949 
unchanged - 1 fixed = 950 total (was 950) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 67 unchanged - 77 fixed = 74 total (was 144) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13605 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834990/HADOOP-13605-branch-2-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 00e749ca5ee1 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 

[jira] [Commented] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602957#comment-15602957
 ] 

Hadoop QA commented on HADOOP-13017:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
59s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834982/HADOOP-13017-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d46896bbdb93 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b18f35f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10876/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 

[jira] [Updated] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2016-10-24 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13626:
---
Attachment: HADOOP-13626.004.patch

Thanks, [~liuml07]. Integrated all your review comments in v004, will commit if 
Jenkins comes back clean.

> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch, HADOOP-13626.004.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13755) Purge superfluous/obsolete S3A Tests

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602919#comment-15602919
 ] 

Steve Loughran commented on HADOOP-13755:
-

patch 008; tested against S3 Ireland

> Purge superfluous/obsolete S3A Tests
> 
>
> Key: HADOOP-13755
> URL: https://issues.apache.org/jira/browse/HADOOP-13755
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-008.patch
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.
>  
> Yetus-friendly rework of HADOOP-13614: *Purge some superfluous/obsolete S3 FS 
> tests that are slowing test runs down*
> Things got confused there about patches vs PRs; closing that and making this 
> a patch-only JIRA



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13755) Purge superfluous/obsolete S3A Tests

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13755:

Attachment: HADOOP-13614-branch-2-008.patch

> Purge superfluous/obsolete S3A Tests
> 
>
> Key: HADOOP-13755
> URL: https://issues.apache.org/jira/browse/HADOOP-13755
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-008.patch
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.
>  
> Yetus-friendly rework of HADOOP-13614: *Purge some superfluous/obsolete S3 FS 
> tests that are slowing test runs down*
> Things got confused there about patches vs PRs; closing that and making this 
> a patch-only JIRA



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13755) Purge superfluous/obsolete S3A Tests

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13755:

Status: Patch Available  (was: Open)

> Purge superfluous/obsolete S3A Tests
> 
>
> Key: HADOOP-13755
> URL: https://issues.apache.org/jira/browse/HADOOP-13755
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-008.patch
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.
>  
> Yetus-friendly rework of HADOOP-13614: *Purge some superfluous/obsolete S3 FS 
> tests that are slowing test runs down*
> Things got confused there about patches vs PRs; closing that and making this 
> a patch-only JIRA



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602860#comment-15602860
 ] 

Hadoop QA commented on HADOOP-13614:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
57s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
43s{color} | {color:red} root in the patch failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 43s{color} 
| {color:red} root in the patch failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
34s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 34s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 29s{color} | {color:orange} root: The patch generated 10 new + 53 unchanged 
- 1 fixed = 63 total (was 54) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-aws in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 20s{color} | 

[jira] [Commented] (HADOOP-13586) Hadoop 3.0 build broken on windows

2016-10-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602820#comment-15602820
 ] 

Andrew Wang commented on HADOOP-13586:
--

Continued Windows support depends on Windows-interested contributors; is there 
someone willing to take this? Else, I don't want to hold the release for this 
JIRA.

> Hadoop 3.0 build broken on windows
> --
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>Priority: Blocker
>
> Builds on windows fail, even before getting to the native bits
> Looks like dev-support/bin/dist-copynativelibs isn't windows-ready



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13660) Upgrade commons-configuration version

2016-10-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602819#comment-15602819
 ] 

Wei-Chiu Chuang commented on HADOOP-13660:
--

Hey [~mackrorysd], thanks a lot for putting these together. It mostly looks 
good, with just a few nits:
* IOException is imported but not used in MetricsConfig.java
* 
{code:title=MetricsConfig#create}
// Commons Configuration defines the message text when file not found
if (e.getMessage().startsWith("Could not locate")) {
  continue;
}
{code}
Depending on the exception message is generally risky. Would it be possible to 
add a regression test? Or is it already covered by existing unit tests?
* This conversion is a little bit concerning:
{code}
conf.setListDelimiterHandler(new DefaultListDelimiterHandler(','));
{code}
 According to 
https://commons.apache.org/proper/commons-configuration/userguide/upgradeto2_0.html#Accessing_Configuration_Properties,
 {{DefaultListDelimiterHandler}} is not 100% compatible with the old behavior 
(even though the old one is inconsistent). Instead, 
{{LegacyListDelimiterHandler}} is said to preserve the old behavior; see the 
sketch below. I want to call this out so we can study it further.
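
For reference, wiring up either handler in Commons Configuration 2 looks like 
this; which one Hadoop should pick is exactly the open question above:

{code:title=List delimiter handlers in Commons Configuration 2}
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.convert.DefaultListDelimiterHandler;
import org.apache.commons.configuration2.convert.LegacyListDelimiterHandler;

PropertiesConfiguration conf = new PropertiesConfiguration();
// What the patch does: consistent 2.x splitting on ','.
conf.setListDelimiterHandler(new DefaultListDelimiterHandler(','));
// Documented alternative that preserves the 1.x behavior, warts and all:
conf.setListDelimiterHandler(new LegacyListDelimiterHandler(','));
{code}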


> Upgrade commons-configuration version
> -
>
> Key: HADOOP-13660
> URL: https://issues.apache.org/jira/browse/HADOOP-13660
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13660-configuration2.001.patch, 
> HADOOP-13660.001.patch, HADOOP-13660.002.patch, HADOOP-13660.003.patch
>
>
> We're currently pulling in version 1.6 - I think we should upgrade to the 
> latest 1.10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602790#comment-15602790
 ] 

Steve Loughran commented on HADOOP-13502:
-

LGTM.

+1, if you do a run-through applied to the latest code before you check it in


> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602792#comment-15602792
 ] 

Steve Loughran commented on HADOOP-13687:
-

Still +1 on this...


> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12774:

Status: Patch Available  (was: Open)

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work in a YARN app, where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
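
A one-method sketch of the proposed change (illustrative, not the actual 
patch):

{code:title=Username resolution (illustrative sketch)}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

static String homeDirUsername() throws IOException {
  // Honors HADOOP_USER_NAME and doAs contexts, unlike
  // System.getProperty("user.name").
  return UserGroupInformation.getCurrentUser().getShortUserName();
}
{code}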



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Status: Patch Available  (was: Open)

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch, 
> HADOOP-13605-branch-2-002.patch, HADOOP-13605-branch-2-003.patch
>
>
> We can't easily debug FS instantiation problems, as there isn't much detail 
> about what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J logger, {{LOGGER}}, and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another
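
The private-logger move reads roughly like this; the class body is a sketch, 
with the {{LOGGER}} name taken from the description above:

{code:title=Private SLF4J logger (sketch)}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public abstract class FileSystemSketch {
  // The public commons-logging LOG field stays, for code that casts it;
  // new diagnostics go through this private SLF4J logger instead.
  private static final Logger LOGGER =
      LoggerFactory.getLogger(FileSystemSketch.class);
}
{code}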



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Attachment: HADOOP-13605-branch-2-003.patch

patch 003; in sync with current codebase

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch, 
> HADOOP-13605-branch-2-002.patch, HADOOP-13605-branch-2-003.patch
>
>
> We can't easily debug FS instantiation problems, as there isn't much detail 
> about what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J logger, {{LOGGER}}, and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Status: Open  (was: Patch Available)

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch, 
> HADOOP-13605-branch-2-002.patch
>
>
> We can't easily debug FS instantiation problems as there isn't much detail in 
> what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}} and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602746#comment-15602746
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


bq. I'm wondering whether we should also save access time, owner, group, 
permission, size etc. in the metadata

Yes, in theory we can save them. However, for S3AFileStatus these fields are 
not set, and the tests do not cover the special cases of S3AFileStatus (i.e., 
whether {{isEmptyDirectory() == true && DirMetadata.isEmpty()}}). IMO, we 
should modify the tests to check these invariants.


I am working on reviewing the rest of the patch. Will post a review soon. 
Thanks for the good work.
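
A hedged sketch of the kind of invariant check being suggested; the interfaces 
below follow the names used in this comment and may not match the patch:

{code:java}
import static org.junit.Assert.assertTrue;

// Hypothetical shapes, mirroring the names used above.
interface EmptyDirStatus { boolean isEmptyDirectory(); }
interface DirMetadata { boolean isEmpty(); }

class S3GuardInvariantCheck {
  // If the file status marks the directory as empty, the metadata-store
  // listing for it should be empty too.
  static void assertEmptyDirInvariant(EmptyDirStatus status,
      DirMetadata listing) {
    if (status.isEmptyDirectory()) {
      assertTrue("metadata listing should be empty for an empty dir",
          listing.isEmpty());
    }
  }
}
{code}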




> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-24 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602745#comment-15602745
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


bq. I'm wondering whether we should also save access time, owner, group, 
permission, size etc. in the metadata

Yes, in theory we can save them. However, for S3AFileStatus these fields are 
not set, and the tests do not cover the special cases of S3AFileStatus (i.e., 
whether {{isEmptyDirectory() == true && DirMetadata.isEmpty()}}). IMO, we 
should modify the tests to check these invariants.


I am working on reviewing the rest of the patch. Will post a review soon. 
Thanks for the good work.




> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Attachment: HADOOP-13017-002.patch

Patch 002; adds Har and WebHdfs streams; cuts s3InputStream out

> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says:
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.
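
A minimal sketch of the fast-exit pattern, assuming a stream whose actual read 
logic lives in a separate hook (names illustrative):

{code:java}
import java.io.IOException;
import java.io.InputStream;

abstract class FastExitInputStream extends InputStream {

  @Override
  public int read(byte[] buffer, int offset, int length) throws IOException {
    // Validate arguments first, as java.io.InputStream does.
    if (buffer == null) {
      throw new NullPointerException();
    }
    if (offset < 0 || length < 0 || length > buffer.length - offset) {
      throw new IndexOutOfBoundsException();
    }
    // Fast exit: a zero-byte read returns 0, even at end of stream,
    // matching the java.io.InputStream contract quoted above.
    if (length == 0) {
      return 0;
    }
    return readInternal(buffer, offset, length);
  }

  // Hypothetical hook for the stream's real read logic.
  protected abstract int readInternal(byte[] b, int off, int len)
      throws IOException;
}
{code}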



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Status: Patch Available  (was: Open)

> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says:
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Status: Open  (was: Patch Available)

> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says:
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13248) Pre-Commit Docker Image build failures

2016-10-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13248.
---
Resolution: Cannot Reproduce

> Pre-Commit Docker Image build failures
> --
>
> Key: HADOOP-13248
> URL: https://issues.apache.org/jira/browse/HADOOP-13248
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anu Engineer
>Priority: Critical
>
> The following email threads from the dev mailing lists explain the problem.
> I'm curious if anyone has noticed issues with pre-commit failing during the 
> Docker image build on Jenkins node H2.  Here are a few examples.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/9661/console
> https://builds.apache.org/job/PreCommit-HADOOP-Build/9662/console
> https://builds.apache.org/job/PreCommit-HADOOP-Build/9670/console
> These are all test runs for the same patch, but the patch just removes 5 
> lines of Java code, so I don't expect the particular patch could cause a 
> failure like this.  I noticed that they all ran on H2.  It seems to be a 
> problem installing oracle-java8-installer:
> WARNING: The following packages cannot be authenticated!
>   oracle-java8-installer
> E: There are problems and -y was used without --force-yes
> The command '/bin/sh -c apt-get -q install --no-install-recommends -y 
> oracle-java8-installer' returned a non-zero code: 100
> --Chris Nauroth
> -
> Thanks for bringing this up. I just ran into the same issue.
> https://builds.apache.org/job/PreCommit-HDFS-Build/15700/console
> But in my case it seems like a different host.  “Building remotely on H0”.
> --Anu
> --
> Another instance of the same failure.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/9690/console
> I am going to open a JIRA so that we can track this issue there. This is on 
> H1 so I don’t think it is machine specific.
> --Anu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13716.
-
Resolution: Fixed

> Add LambdaTestUtils class for tests; fix eventual consistency problem in 
> contract test setup
> 
>
> Key: HADOOP-13716
> URL: https://issues.apache.org/jira/browse/HADOOP-13716
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, 
> HADOOP-13716-003.patch, HADOOP-13716-005.patch, HADOOP-13716-006.patch, 
> HADOOP-13716-branch-2-004.patch
>
>
> To make our tests robust against timing problems and eventually consistent 
> stores, we need to do more spin-and-wait for state.
> We have some code in {{GenericTestUtils.waitFor}} to await a condition being 
> met, but the predicate it calls doesn't throw exceptions, so there's no way 
> for a probe to report an underlying failure, and all you get is the eventual 
> "timed out" message.
> We can do better, and in closure-ready languages (scala & scalatest, groovy 
> and some slider code) we have examples to follow. Some of that work has been 
> reimplemented slightly in {{S3ATestUtils.eventually}}.
> I propose adding a class in the test tree, {{Eventually}}, to be a 
> successor/replacement for these:
> # has an eventually/waitfor operation taking a predicate that throws an 
> exception
> # has an "evaluate" operation which tries to evaluate an answer until the 
> operation stops raising an exception (again, from scalatest)
> # pluggable backoff strategies (from Scalatest; lets you do exponential as 
> well as linear)
> # option of adding a special handler to generate the failure exception (e.g. 
> run more detailed diagnostics for the exception text, etc.)
> # be Java 8 lambda expression friendly
> # be testable and tested itself.
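
A hedged sketch of the lambda-friendly probe loop being proposed, with a 
simple linear backoff; the eventual LambdaTestUtils API may differ:

{code:java}
import java.util.concurrent.Callable;

final class EventuallySketch {

  // Retry eval until it stops throwing or the timeout expires; on timeout,
  // rethrow the last exception so the real failure cause is preserved
  // instead of a bare "timed out" message.
  static <T> T eventually(long timeoutMillis, long intervalMillis,
      Callable<T> eval) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    Exception last;
    do {
      try {
        return eval.call();
      } catch (Exception e) {
        last = e;
        Thread.sleep(intervalMillis);
      }
    } while (System.currentTimeMillis() < deadline);
    throw last;
  }
}
{code}

A test could then write, say, {{eventually(30000, 500, () -> 
fs.getFileStatus(path))}}.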



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602612#comment-15602612
 ] 

Hadoop QA commented on HADOOP-13166:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13166 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804426/HADOOP-13166-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 055047cec36a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b18f35f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10874/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10874/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13166-001.patch
>
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
> 
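
A hedged sketch of the kind of root-directory check the title describes; the 
method name and assertion are illustrative, not the attached patch:

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootStatusSketch {
  // getFileStatus("/") must succeed and report a directory.
  public void assertRootIsDirectory(FileSystem fs) throws Exception {
    FileStatus status = fs.getFileStatus(new Path("/"));
    assertTrue("root is not a directory: " + status, status.isDirectory());
  }
}
{code}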

[jira] [Updated] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13716:

Status: Open  (was: Patch Available)

> Add LambdaTestUtils class for tests; fix eventual consistency problem in 
> contract test setup
> 
>
> Key: HADOOP-13716
> URL: https://issues.apache.org/jira/browse/HADOOP-13716
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, 
> HADOOP-13716-003.patch, HADOOP-13716-005.patch, HADOOP-13716-006.patch, 
> HADOOP-13716-branch-2-004.patch
>
>
> To make our tests robust against timing problems and eventually consistent 
> stores, we need to do more spin-and-wait for state.
> We have some code in {{GenericTestUtils.waitFor}} to await a condition being 
> met, but the predicate it calls doesn't throw exceptions, so there's no way 
> for a probe to report an underlying failure, and all you get is the eventual 
> "timed out" message.
> We can do better, and in closure-ready languages (scala & scalatest, groovy 
> and some slider code) we have examples to follow. Some of that work has been 
> reimplemented slightly in {{S3ATestUtils.eventually}}.
> I propose adding a class in the test tree, {{Eventually}}, to be a 
> successor/replacement for these:
> # has an eventually/waitfor operation taking a predicate that throws an 
> exception
> # has an "evaluate" operation which tries to evaluate an answer until the 
> operation stops raising an exception (again, from scalatest)
> # pluggable backoff strategies (from Scalatest; lets you do exponential as 
> well as linear)
> # option of adding a special handler to generate the failure exception (e.g. 
> run more detailed diagnostics for the exception text, etc.)
> # be Java 8 lambda expression friendly
> # be testable and tested itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602597#comment-15602597
 ] 

Steve Loughran commented on HADOOP-12774:
-

submitting rebased on branch-2

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work in a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name 
> everywhere.
> This is a simple change in the source, though testing is harder; probably 
> best to try it in a doAs clause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602593#comment-15602593
 ] 

Steve Loughran commented on HADOOP-13614:
-

HADOOP-13755 replaces this with patch-based rather than PR-based submission.


> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, 
> HADOOP-13614-branch-2-008.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13755) Purge superfluous/obsolete S3A Tests

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13755:

Summary: Purge superfluous/obsolete S3A Tests  (was: Purge 
superfluous/obsolete S3A FS)

> Purge superfluous/obsolete S3A Tests
> 
>
> Key: HADOOP-13755
> URL: https://issues.apache.org/jira/browse/HADOOP-13755
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.
>  
> Yetus-friendly rework of HADOOP-13614: *Purge some superfluous/obsolete S3 FS 
> tests that are slowing test runs down*
> Things got confused there about patches vs PRs; closing that and having this 
> patch-only JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Status: Open  (was: Patch Available)

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, 
> HADOOP-13614-branch-2-008.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13204) Über-jira: S3a phase III: scale and tuning

2016-10-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13204:

Attachment: HADOOP-13614-branch-2-008.patch

Patch 008; in sync with branch-2

> Über-jira: S3a phase III: scale and tuning
> --
>
> Key: HADOOP-13204
> URL: https://issues.apache.org/jira/browse/HADOOP-13204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13614-branch-2-008.patch
>
>
> S3A Phase III work; post 2.8. 
> Areas could include
> * customisation
> * performance enhancement
> * management and metrics
> * error/failure handling
> And of course any bugs which surface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13755) Purge superfluous/obsolete S3A FS

2016-10-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13755:
---

 Summary: Purge superfluous/obsolete S3A FS
 Key: HADOOP-13755
 URL: https://issues.apache.org/jira/browse/HADOOP-13755
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


Some of the slow test cases contain tests that are now obsoleted by newer ones. 
For example, {{ITestS3ADeleteManyFiles}} has the test case {{testOpenCreate()}}, 
which writes then reads files up to 25 MB.

Have a look at which of the s3a tests are taking time, review them to see if 
newer tests have superseded the slow ones, and cut them where appropriate.
 

Yetus-friendly rework of HADOOP-13614: *Purge some superfluous/obsolete S3 FS 
tests that are slowing test runs down*

Things got confused there about patches vs PRs; closing that and having this 
patch-only JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602574#comment-15602574
 ] 

Xiao Chen commented on HADOOP-13669:


Thanks for the quick response [~aw]! (And sorry for not being clear above.)

I believe what you described is addendum 2: trunk has 2 existing findbugs 
issues, and the patch fixed the 2. So as soon as addendum 2 is reviewed, we 
can commit... right?

My last comment was a post-mortem on the previously-committed addendum 1. It 
says trunk has 2 existing issues and the patch passed (but without the 
explicit (0-2=0) message), so I misunderstood it and thought it had fixed the 
issue; my bad.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1, HADOOP-13669.addendem2.patch, 
> HADOOP-13669.addendum.patch, trigger.02.patch, trigger.patch
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.
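
A minimal sketch of the log-before-throw pattern described; the handler shape 
is illustrative, not the KMS code:

{code:java}
import java.util.concurrent.Callable;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LogBeforeThrowSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogBeforeThrowSketch.class);

  <T> T handle(Callable<T> op) throws Exception {
    try {
      return op.call();
    } catch (Exception e) {
      // Log server-side with the stack trace before rethrowing; the
      // container only propagates the message to the client.
      LOG.warn("Request failed", e);
      throw e;
    }
  }
}
{code}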



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602529#comment-15602529
 ] 

Allen Wittenauer commented on HADOOP-13669:
---

I think you're still missing it:

The pre-process run happens first, and part of its output is this:

bq. -1  findbugs0m 26s  hadoop-common-project/hadoop-kms in trunk has 2 
extant Findbugs warnings.

i.e., trunk has 2 existing findbugs issues.

Yetus then starts on its second pass with the patch in place (the mvninstall 
lines are good indicators of where it is). It then provides this output:

bq. +1  findbugs0m 34s  hadoop-common-project/hadoop-kms generated 0 
new + 0 unchanged - 2 fixed = 0 total (was 2) 

which indicates the patch fixed the 2 findbugs issues.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1, HADOOP-13669.addendem2.patch, 
> HADOOP-13669.addendum.patch, trigger.02.patch, trigger.patch
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes

2016-10-24 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602518#comment-15602518
 ] 

Abhishek Modi commented on HADOOP-13680:


I have updated the PR with the suggested changes. [~steve_l], could you please 
take a look?

> fs.s3a.readahead.range to use getLongBytes
> --
>
> Key: HADOOP-13680
> URL: https://issues.apache.org/jira/browse/HADOOP-13680
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> The {{fs.s3a.readahead.range}} value is measured in bytes, but can be 
> hundreds of KB. It would be easier to use getLongBytes and set it to values 
> like "300k".
> This will be backwards compatible with any existing settings, because a 
> value without a suffix will still be interpreted as bytes.
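
A hedged sketch of the proposed change using {{Configuration.getLongBytes()}}; 
the constant names and default are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

class ReadaheadConfigSketch {
  static final String READAHEAD_RANGE = "fs.s3a.readahead.range";
  static final long DEFAULT_READAHEAD_RANGE = 64 * 1024;

  static long readaheadRange(Configuration conf) {
    // getLongBytes accepts suffixed values such as "300k" or "1m"; a bare
    // number is still interpreted as bytes, keeping old settings working.
    return conf.getLongBytes(READAHEAD_RANGE, DEFAULT_READAHEAD_RANGE);
  }
}
{code}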



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602515#comment-15602515
 ] 

Xiao Chen commented on HADOOP-13669:


I see... It seems I misunderstood the pre-commit run on addendum 1. That 
result was {{+1  findbugs  0m 32s  the patch passed}}. Sorry about that. 

Appreciate any reviews on addendum 2. (Is it required? Or does jenkins's +1 
suffice here?)

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1, HADOOP-13669.addendem2.patch, 
> HADOOP-13669.addendum.patch, trigger.02.patch, trigger.patch
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-10-24 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602506#comment-15602506
 ] 

John Zhuge commented on HADOOP-12718:
-

I will post a question to hdfs-dev.

Unfortunately I don't have access to a Windows env. Will solicit community 
help with the next patch.

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602491#comment-15602491
 ] 

Allen Wittenauer commented on HADOOP-13669:
---

bq.  For the patched run, we can see this:

No need to quote the console.  It's in the JIRA message.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1, HADOOP-13669.addendem2.patch, 
> HADOOP-13669.addendum.patch, trigger.02.patch, trigger.patch
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-10-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602490#comment-15602490
 ] 

Hadoop QA commented on HADOOP-13166:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13166 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804426/HADOOP-13166-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0a01a9f1f980 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b18f35f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10870/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10870/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13166-001.patch
>
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 

[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602451#comment-15602451
 ] 

Xiao Chen commented on HADOOP-13669:


The pre-commit -1 is again on the trunk run. For the patched run, we can see 
this:
{noformat}
 findbugs detection: patch

hadoop-common-project/hadoop-kms generated 0 new + 0 unchanged - 2 fixed = 0 
total (was 2)
{noformat}
So it should be fixed this time... Also verified via {{mvn compile 
findbugs:findbugs}} locally.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1, HADOOP-13669.addendem2.patch, 
> HADOOP-13669.addendum.patch, trigger.02.patch, trigger.patch
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


