[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-10-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Summary: Web UI error accessing links which need authorization when 
Kerberos  (was: Web UI authorization error accessing /logs/ when Kerberos)

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
>
> Use Hadoop in secure mode.
> Log in as a KDC user and run kinit.
> Start Firefox with Kerberos (SPNEGO) enabled.
> Access http://localhost:50070/logs/
> A 403 authorization error is returned; only the hdfs user can access the
> logs. As a regular user, I would expect to be able to reach the logs link in
> the web UI.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So, either:
> 1. don't show the links if only the hdfs user is able to access them,
> 2. provide a mechanism to add users to the web application realm, or
> 3. note that we pass authentication, so the issue is authorization to
> /logs/.
> I suspect the /logs/ path is secured in the web descriptor, so by default
> users don't have access to secure paths.
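For reference, a minimal sketch of the reproduction from a terminal (the principal names are hypothetical; the 403/200 split reflects the behavior reported above):

{noformat}
$ kinit tester@EXAMPLE.COM
$ curl -i --negotiate -u : http://localhost:50070/logs/
HTTP/1.1 403 User tester is unauthorized to access this page.

$ kinit hdfs@EXAMPLE.COM
$ curl -i --negotiate -u : http://localhost:50070/logs/
HTTP/1.1 200 OK
{noformat}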



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-18 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587473#comment-15587473
 ] 

Benoy Antony edited comment on HADOOP-12082 at 10/19/16 3:06 AM:
-

Committed to trunk. Thanks for the contribution [~hgadre].
Could you please upload the patches for branch-2 and branch-2.8? There is a 
conflict in the pom. Once uploaded, I will commit them as well.




was (Author: benoyantony):
Committed to trunk. Thanks for the contribution [~hgadre].
Could you please upload the patches for branch-2 and branch-2.8? I will commit 
them as well.



> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a
> custom authentication scheme (in addition to Kerberos) via the
> AltKerberosAuthenticationHandler class. But that approach selects the
> authentication mechanism based on the User-Agent HTTP header, which does not
> conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication
> mechanism that can be used by a server to challenge a client request and by a
> client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized)
> response with a ‘WWW-Authenticate’ header which includes at least one
> challenge that indicates the authentication scheme(s) and parameters
> applicable to the Request-URI.
> - If the server supports multiple authentication schemes, it may return
> multiple challenges with a 401 (Unauthorized) response, and each challenge
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the
> ‘WWW-Authenticate’ response header. As per the following documentation, the
> ‘Negotiate’ challenge scheme is only applicable to the Kerberos (and Windows
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand, for LDAP authentication the ‘Basic’ authentication scheme
> is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence the idea for this feature would be to provide a custom implementation
> of the Hadoop AuthenticationHandler and Authenticator interfaces which
> supports both schemes: Kerberos (via the Negotiate auth challenge) and LDAP
> (via the Basic auth challenge). During the authentication phase, the server
> would send both challenges and let the client pick the appropriate one. If
> the client responds with an ‘Authorization’ header tagged with ‘Negotiate’,
> it will use Kerberos authentication; if the client responds with an
> ‘Authorization’ header tagged with ‘Basic’, it will use LDAP authentication.
> Note: some HTTP clients (e.g. curl or the Apache HttpClient Java library)
> need to be configured to use one scheme over the other, e.g.:
> - the curl tool supports options to use either Kerberos (via the --negotiate
> flag) or username/password-based authentication (via the --basic and -u
> flags).
> - the Apache HttpClient library can be configured to use a specific
> authentication scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically, web browsers automatically choose an authentication scheme based
> on a notion of “strength” of security; e.g., see the [design of the Chrome
> browser for HTTP
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
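Since the description points at configuring Apache HttpClient for a specific scheme, here is a minimal, hypothetical sketch (HttpClient 4.x; the host, port, and credentials are made up) of a client that prefers the Basic challenge when a server offers both Negotiate and Basic:

{noformat}
import java.util.Collections;

import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.config.AuthSchemes;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class BasicSchemeClientSketch {
  public static void main(String[] args) throws Exception {
    // Username/password credentials for a hypothetical LDAP-backed account.
    BasicCredentialsProvider credentials = new BasicCredentialsProvider();
    credentials.setCredentials(new AuthScope("nn.example.com", 50070),
        new UsernamePasswordCredentials("tester", "secret"));

    // Prefer the Basic challenge even if the server also offers Negotiate.
    RequestConfig config = RequestConfig.custom()
        .setTargetPreferredAuthSchemes(Collections.singletonList(AuthSchemes.BASIC))
        .build();

    try (CloseableHttpClient client = HttpClients.custom()
            .setDefaultCredentialsProvider(credentials)
            .setDefaultRequestConfig(config)
            .build();
         CloseableHttpResponse response =
            client.execute(new HttpGet("http://nn.example.com:50070/logs/"))) {
      System.out.println(response.getStatusLine());
    }
  }
}
{noformat}

A Kerberos client would list {{AuthSchemes.SPNEGO}} instead; the point is that the scheme choice becomes per-request configuration rather than a User-Agent hack.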



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-

[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-18 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587473#comment-15587473
 ] 

Benoy Antony commented on HADOOP-12082:
---

Committed to trunk. Thanks for the contribution [~hgadre].
Could you please upload the patches for branch-2 and branch-2.8? I will commit 
them as well.



> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a
> custom authentication scheme (in addition to Kerberos) via the
> AltKerberosAuthenticationHandler class. But that approach selects the
> authentication mechanism based on the User-Agent HTTP header, which does not
> conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication
> mechanism that can be used by a server to challenge a client request and by a
> client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized)
> response with a ‘WWW-Authenticate’ header which includes at least one
> challenge that indicates the authentication scheme(s) and parameters
> applicable to the Request-URI.
> - If the server supports multiple authentication schemes, it may return
> multiple challenges with a 401 (Unauthorized) response, and each challenge
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the
> ‘WWW-Authenticate’ response header. As per the following documentation, the
> ‘Negotiate’ challenge scheme is only applicable to the Kerberos (and Windows
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand, for LDAP authentication the ‘Basic’ authentication scheme
> is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence the idea for this feature would be to provide a custom implementation
> of the Hadoop AuthenticationHandler and Authenticator interfaces which
> supports both schemes: Kerberos (via the Negotiate auth challenge) and LDAP
> (via the Basic auth challenge). During the authentication phase, the server
> would send both challenges and let the client pick the appropriate one. If
> the client responds with an ‘Authorization’ header tagged with ‘Negotiate’,
> it will use Kerberos authentication; if the client responds with an
> ‘Authorization’ header tagged with ‘Basic’, it will use LDAP authentication.
> Note: some HTTP clients (e.g. curl or the Apache HttpClient Java library)
> need to be configured to use one scheme over the other, e.g.:
> - the curl tool supports options to use either Kerberos (via the --negotiate
> flag) or username/password-based authentication (via the --basic and -u
> flags).
> - the Apache HttpClient library can be configured to use a specific
> authentication scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically, web browsers automatically choose an authentication scheme based
> on a notion of “strength” of security; e.g., see the [design of the Chrome
> browser for HTTP
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587383#comment-15587383
 ] 

Hudson commented on HADOOP-12082:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10636 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10636/])
HADOOP-12082 Support multiple authentication schemes via (benoy: rev 
4bca385241c0fc8ff168c7b0f2984a7aed2c7492)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* (add) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/package-info.java
* (add) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestLdapAuthenticationHandler.java
* (add) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandlerUtil.java
* (edit) hadoop-project/pom.xml
* (edit) hadoop-common-project/hadoop-auth/pom.xml
* (add) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/MultiSchemeAuthenticationHandler.java
* (add) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestMultiSchemeAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/MultiSchemeDelegationTokenAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestKerberosAuthenticator.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java
* (add) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/HttpConstants.java
* (add) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/LdapAuthenticationHandler.java
* (add) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/CompositeAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* (edit) hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md
* (add) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/LdapConstants.java


> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a
> custom authentication scheme (in addition to Kerberos) via the
> AltKerberosAuthenticationHandler class. But that approach selects the
> authentication mechanism based on the User-Agent HTTP header, which does not
> conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication
> mechanism that can be used by a server to challenge a client request and by a
> client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized)
> response with a ‘WWW-Authenticate’ header which includes at least one
> challenge that indicates the authentication scheme(s) and parameters
> applicable to the Request-URI.
> - If the server supports multiple authentication schemes, it may return
> multiple challenges with a 401 (Unauthorized) response, and each challenge
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands
> and request credentials from the user based upon that challenge.

[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587369#comment-15587369
 ] 

John Zhuge commented on HADOOP-7352:


Thanks [~xiaochen] for the review and commit! Thanks [~ste...@apache.org] for 
the review, and many others for the discussions.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.
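To make the contract change concrete, a minimal sketch of a caller before and after (the directory path is hypothetical, and the exact IOException subtype raised on an access error is an assumption):

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStatusContractSketch {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path dir = new Path("/tmp/unreadable-dir");  // hypothetical, e.g. mode 000

    // Old behavior: listStatus could return null on an access error, so
    // defensive callers had to null-check or risk an NPE:
    //   FileStatus[] statuses = fs.listStatus(dir);
    //   if (statuses != null) { ... }

    // New contract: the access error surfaces as an IOException instead.
    try {
      for (FileStatus status : fs.listStatus(dir)) {
        System.out.println(status.getPath());
      }
    } catch (IOException e) {
      System.err.println("listStatus failed: " + e);
    }
  }
}
{noformat}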



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Release Note: Change the FileSystem#listStatus contract to never return null. 
Prior to 3.0.0, local filesystems returned null upon an access error, which is 
considered erroneous. Callers should expect FileSystem#listStatus to throw an 
IOException upon access error.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13693) Remove the message about HTTP OPTIONS in SPNEGO initialization message from kms audit log

2016-10-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587334#comment-15587334
 ] 

Hudson commented on HADOOP-13693:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10635 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10635/])
HADOOP-13693. Remove the message about HTTP OPTIONS in SPNEGO (xiao: rev 
d75cbc5749808491d2b06f80506d95b6fb1b9e9c)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuthenticationFilter.java


> Remove the message about HTTP OPTIONS in SPNEGO initialization message from 
> kms audit log
> -
>
> Key: HADOOP-13693
> URL: https://issues.apache.org/jira/browse/HADOOP-13693
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch
>
>
> For a successful KMS operation, kms-audit.log shows an UNAUTHENTICATED
> ErrorMsg:'Authentication required' message before the OK messages. This is
> expected, and is due to the SPNEGO authentication sequence (notice the method
> is {{OPTIONS}}):
> {noformat}
> 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS 
> URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt
>  ErrorMsg:'Authentication required'
> 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=0ms] 
> 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=10193ms] 
> {noformat}
> However, admins/auditors see this and can easily get confused/alerted. We 
> should make it obvious this is benign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587333#comment-15587333
 ] 

Hudson commented on HADOOP-7352:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10635 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10635/])
HADOOP-7352. FileSystem#listStatus should throw IOE upon access error. (xiao: 
rev efdf810cf9f72d78e97e860576c64a382ece437c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/web/TestTokenAspect.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithAcls.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathData.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13693) Remove the message about HTTP OPTIONS in SPNEGO initialization message from kms audit log

2016-10-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587311#comment-15587311
 ] 

Xiao Chen edited comment on HADOOP-13693 at 10/19/16 1:31 AM:
--

Committed to trunk. Thanks Andrew, Xiaoyu and Arun for the review and feedback!


was (Author: xiaochen):
Committed to trunk. Thanks Andrew, Xiaoyu and Arun for the feedback!

> Remove the message about HTTP OPTIONS in SPNEGO initialization message from 
> kms audit log
> -
>
> Key: HADOOP-13693
> URL: https://issues.apache.org/jira/browse/HADOOP-13693
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch
>
>
> For a successful KMS operation, kms-audit.log shows an UNAUTHENTICATED
> ErrorMsg:'Authentication required' message before the OK messages. This is
> expected, and is due to the SPNEGO authentication sequence (notice the method
> is {{OPTIONS}}):
> {noformat}
> 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS 
> URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt
>  ErrorMsg:'Authentication required'
> 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=0ms] 
> 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=10193ms] 
> {noformat}
> However, admins/auditors see this and can easily get confused/alerted. We 
> should make it obvious this is benign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-7352:
--
Hadoop Flags: Incompatible change, Reviewed  (was: Incompatible change)

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13693) Remove the message about HTTP OPTIONS in SPNEGO initialization message from kms audit log

2016-10-18 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13693:
---
   Resolution: Fixed
 Hadoop Flags: Incompatible change, Reviewed  (was: Incompatible change)
Fix Version/s: 3.0.0-alpha2
 Release Note: kms-audit.log used to show an UNAUTHENTICATED message even 
for successful operations, because of the OPTIONS HTTP request during the SPNEGO 
initial handshake. This message caused more confusion than it helped, and has 
hence been removed.
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Andrew, Xiaoyu and Arun for the feedback!

> Remove the message about HTTP OPTIONS in SPNEGO initialization message from 
> kms audit log
> -
>
> Key: HADOOP-13693
> URL: https://issues.apache.org/jira/browse/HADOOP-13693
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch
>
>
> For a successful KMS operation, kms-audit.log shows an UNAUTHENTICATED
> ErrorMsg:'Authentication required' message before the OK messages. This is
> expected, and is due to the SPNEGO authentication sequence (notice the method
> is {{OPTIONS}}):
> {noformat}
> 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS 
> URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt
>  ErrorMsg:'Authentication required'
> 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=0ms] 
> 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=10193ms] 
> {noformat}
> However, admins/auditors see this and can easily get confused/alerted. We 
> should make it obvious this is benign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587286#comment-15587286
 ] 

Xiao Chen commented on HADOOP-7352:
---

Hi [~jzhuge],
Could you please put up a short release note? Thanks.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13693) Remove the message about HTTP OPTIONS in SPNEGO initialization message from kms audit log

2016-10-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587285#comment-15587285
 ] 

Xiaoyu Yao commented on HADOOP-13693:
-

v02 patch LGTM. +1.

> Remove the message about HTTP OPTIONS in SPNEGO initialization message from 
> kms audit log
> -
>
> Key: HADOOP-13693
> URL: https://issues.apache.org/jira/browse/HADOOP-13693
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch
>
>
> For a successful KMS operation, kms-audit.log shows an UNAUTHENTICATED
> ErrorMsg:'Authentication required' message before the OK messages. This is
> expected, and is due to the SPNEGO authentication sequence (notice the method
> is {{OPTIONS}}):
> {noformat}
> 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS 
> URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt
>  ErrorMsg:'Authentication required'
> 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=0ms] 
> 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=10193ms] 
> {noformat}
> However, admins/auditors see this and can easily get confused/alerted. We 
> should make it obvious this is benign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-7352:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed this to trunk.
Thanks [~jzhuge] for the patch, [~ste...@apache.org] and all for the review and 
discussions!

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-10-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587271#comment-15587271
 ] 

Xiao Chen commented on HADOOP-7352:
---

+1 to patch 5, committing.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should
> throw FileNotFoundException instead of returning null when the target
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus
> may still return null when the target directory exists but does not grant
> read permission.  This causes NPEs in many callers, for all the reasons cited
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-18 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587196#comment-15587196
 ] 

Robert Kanter commented on HADOOP-10075:


Thanks [~raviprak] for taking a look.
Here are my responses to your questions/notes:
- On your -2 port question: I was maybe a bit overzealous in updating -1 port 
checks to -2.  I think this should only happen on 
{{ServerConnector.getLocalPort()}}.  I'll change this to only check for -1.
- {{HttpServer2.createDefaultChannelConnector()}} is static, so it doesn't have 
access to the {{Server}} instance unless we pass it.  
- The Javadoc says "use SelectChannelConnector with SslContextFactory".  I also 
couldn't find anything on {{SelectChannelConnector}}, but I am using 
{{SslContextFactory}} with a {{ServerConnector}}.  The fact that 
{{SelectChannelConnector}} isn't hyperlinked in the Javadoc and I can't find a 
class for it makes me think it's a typo or mistake.  {{ServerConnector}}'s 
Javadoc does say that it can work with SSL.  I have also tested SSL and it 
works with the current patch.  I don't remember where, but I did look at some 
examples when figuring out the SSL stuff, and it was done this way.
- On the {{addNoCacheFilter}} change: Yes, I believe this was a bug earlier; 
though I could be wrong.

Given that the -2 port change is trivial, I'll wait on more feedback before 
uploading a new patch version with that change.
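A minimal Jetty 9 sketch of the {{ServerConnector}} + {{SslContextFactory}} wiring discussed above (the keystore path, password, and port are placeholder values, not the patch itself):

{noformat}
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.SecureRequestCustomizer;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.SslConnectionFactory;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class JettySslSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server();

    // Placeholder keystore settings.
    SslContextFactory sslContextFactory = new SslContextFactory();
    sslContextFactory.setKeyStorePath("/path/to/keystore.jks");
    sslContextFactory.setKeyStorePassword("changeit");

    HttpConfiguration httpsConfig = new HttpConfiguration();
    httpsConfig.addCustomizer(new SecureRequestCustomizer());

    // SSL is layered onto a plain HTTP connection factory via ServerConnector.
    ServerConnector sslConnector = new ServerConnector(server,
        new SslConnectionFactory(sslContextFactory, "http/1.1"),
        new HttpConnectionFactory(httpsConfig));
    sslConnector.setPort(8443);
    server.addConnector(sslConnector);

    server.start();
    // Per the Jetty 9 javadoc, getLocalPort() returns the bound port once
    // started, -1 if not yet opened, and -2 once closed (hence the -2 checks
    // discussed in this thread).
    System.out.println("listening on port " + sslConnector.getLocalPort());
    server.join();
  }
}
{noformat}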

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13693) Remove the message about HTTP OPTIONS in SPNEGO initialization message from kms audit log

2016-10-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587187#comment-15587187
 ] 

Xiao Chen commented on HADOOP-13693:


Given Andrew's +1 and Arun/Xiaoyu's positive feedback, I plan to commit this 
later today.

> Remove the message about HTTP OPTIONS in SPNEGO initialization message from 
> kms audit log
> -
>
> Key: HADOOP-13693
> URL: https://issues.apache.org/jira/browse/HADOOP-13693
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch
>
>
> For a successful KMS operation, kms-audit.log shows an UNAUTHENTICATED
> ErrorMsg:'Authentication required' message before the OK messages. This is
> expected, and is due to the SPNEGO authentication sequence (notice the method
> is {{OPTIONS}}):
> {noformat}
> 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS 
> URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt
>  ErrorMsg:'Authentication required'
> 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=0ms] 
> 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, 
> accessCount=1, interval=10193ms] 
> {noformat}
> However, admins/auditors see this and can easily get confused/alerted. We 
> should make it obvious this is benign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2016-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587136#comment-15587136
 ] 

Hadoop QA commented on HADOOP-13363:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816966/HADOOP-13363.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 714e389ffa51 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c62ae71 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10826/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10826/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for
> example, https://gist.github.com/BennettSmith/7111094 ).  To avoid crazy
> workarounds in the build environment, and given that 2.5.0 is slowly
> disappearing as a standard installable package even for Linux/x86, we need to
> either upgrade, bundle our own copy, or find another solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2016-10-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587093#comment-15587093
 ] 

stack commented on HADOOP-13363:


Did a ML discussion happen?

pb3.1.0 is out. It runs in a 2.5.0 compatibility mode by default, and has some 
facility for saving on data copying that might be of interest in the NN. If 
upgrading, you need to run the newer protoc. The newer lib can't read the 
protos generated by the older protoc (IIRC). The newer protoc, in my 
experience, has no problem digesting pb 2.5.0 .proto files, but the generated 
files are a little different and not consumable by the old protobuf lib.

Would this be a problem? Old clients can talk to the new servers because the 
wire format is compatible. Is anyone other than Hadoop consuming Hadoop protos 
directly? Are Hadoop proto files considered InterfaceAudience.Private or 
InterfaceAudience.Public? If the former, I could work on a patch for 3.0.0 
(it'd be big but boring). Does Hadoop have Protobuf in its API anywhere? (I 
can take a look, but I'm being lazy and asking here first.)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for
> example, https://gist.github.com/BennettSmith/7111094 ).  To avoid crazy
> workarounds in the build environment, and given that 2.5.0 is slowly
> disappearing as a standard installable package even for Linux/x86, we need to
> either upgrade, bundle our own copy, or find another solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13400) update the ApplicationClassLoader implementation in line with latest Java ClassLoader implementation

2016-10-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587046#comment-15587046
 ] 

Sangjin Lee commented on HADOOP-13400:
--

Sorry it took a while. +1. I'll commit it to the feature branch shortly.

> update the ApplicationClassLoader implementation in line with latest Java 
> ClassLoader implementation
> 
>
> Key: HADOOP-13400
> URL: https://issues.apache.org/jira/browse/HADOOP-13400
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Vrushali C
> Attachments: HADOOP-13400-HADOOP-13070.01.patch, 
> HADOOP-13400-HADOOP-13070.02.patch
>
>
> The current {{ApplicationClassLoader}} implementation is aged, and does not
> reflect the latest Java {{ClassLoader}} implementation. One example is the
> use of the fine-grained classloading lock.
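As background on the fine-grained lock mentioned above, a minimal sketch of the Java 7+ parallel-capable classloader pattern (this illustrates the JDK API, not the actual patch):

{noformat}
import java.net.URL;
import java.net.URLClassLoader;

public class ParallelCapableLoaderSketch extends URLClassLoader {
  static {
    // Opt in to parallel classloading (Java 7+); without this, loadClass
    // serializes all loads on the ClassLoader instance itself.
    ClassLoader.registerAsParallelCapable();
  }

  public ParallelCapableLoaderSketch(URL[] urls, ClassLoader parent) {
    super(urls, parent);
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    // For parallel-capable loaders, getClassLoadingLock returns a per-class-name
    // lock rather than locking the whole loader.
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        c = super.loadClass(name, resolve);
      }
      return c;
    }
  }
}
{noformat}

Per-name locking lets independent classes load concurrently while still serializing races on the same class name.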



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-18 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586969#comment-15586969
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


Ping [~liuml07]. Do you have any updates on this?  

As I started to work on HADOOP-13650, I realized that not one {{MetadataStore}} 
implementation has actually been checked in yet. I'd much appreciate it if we 
could have a patch soon, so that work on HADOOP-13650 can proceed.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.wip.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586868#comment-15586868
 ] 

Kihwal Lee commented on HADOOP-13731:
-

It's very unlikely to be a Hadoop issue. The Java compiler is throwing a 
NullPointerException.
It is hard to believe, although not impossible, that the compiler is that 
broken.  It could be another condition (out of memory, etc.) manifesting like 
this.

> Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
> ---
>
> Key: HADOOP-13731
> URL: https://issues.apache.org/jira/browse/HADOOP-13731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: OS : Ubuntu 16.04 (Xenial)
> JDK: OpenJDK 7 and OpenJDK 8
>Reporter: Anant Sharma
>Priority: Critical
>  Labels: build
>
> I am trying to build Hadoop 2.7.2 (direct from upstream with no
> modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial), but I get the
> following errors. The result is the same with OpenJDK 8, but I switched back
> to OpenJDK 7 since it's the recommended version. This is a critical issue
> since I am unable to move beyond building Hadoop.
> Other configuration details:
> Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
> Maven: 3.3.9
> Command Line:
>  mvn package -Pdist -DskipTests -Dtar
> Build log:
> [INFO] Building jar: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
> [INFO]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.7.2
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
> [INFO] Executed tasks
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [WARNING] [git, branch] failed with error code 128
> [INFO] SCM: NONE
> [INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
> [INFO]
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO]
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 852 source files to 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
> An exception has occurred in the compiler (1.7.0_95). Please file a bug at 
> the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
> checking the Bug Parade for duplicates. Include your program and the 
> following diagnostic in your report.  Thank you.
> java.lang.NullPointerException
> at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
> at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
> at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
> at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
> at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at 

[jira] [Updated] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13541:
-
Fix Version/s: 3.0.0-alpha2

> explicitly declare the Joda time version S3A depends on
> ---
>
> Key: HADOOP-13541
> URL: https://issues.apache.org/jira/browse/HADOOP-13541
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13541-branch-2.8-001.patch
>
>
> Different builds of Hadoop are pulling in wildly different versions of Joda
> time, depending on what other transitive dependencies are involved. For
> example: 2.7.3 is somehow picking up Joda time 2.9.4, while branch-2.8 is
> actually behind on 2.8.1. That's going to cause confusion when people upgrade
> from 2.7.x to 2.8 and find that a dependency has gotten older.
> I propose explicitly declaring a dependency on joda-time in s3a, then setting
> the version to 2.9.4; upgrades are things we can manage.
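A minimal sketch of what such an explicit declaration might look like in the relevant pom (the exact module and placement are assumptions, not the committed patch):

{noformat}
<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.9.4</version>
</dependency>
{noformat}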



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13540:
-
Fix Version/s: 3.0.0-alpha2

> improve section on troubleshooting s3a auth problems
> 
>
> Key: HADOOP-13540
> URL: https://issues.apache.org/jira/browse/HADOOP-13540
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13540-001.patch, HADOOP-13540-branch-2-002.patch, 
> HADOOP-13540-branch-2-003.patch, HADOOP-13540-branch-2-004.patch
>
>
> We should add more on how to go about diagnosing s3a auth problems.
> When an auth problem happens, the need to keep the credentials secret makes
> it hard to automate diagnostics; we can at least provide a better runbook for
> users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-7363:

Target Version/s: 3.0.0-alpha2  (was: 2.9.0)
   Fix Version/s: (was: 2.9.0)
  3.0.0-alpha2

Looks like this was not committed to branch-2 and only to trunk, so updating 
fix/target versions.

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13546:
-
Fix Version/s: 3.0.0-alpha2

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, 
> HADOOP-13546-HADOOP-13436.007.patch
>
>
> Override #equals and #hashCode so that equivalent instances compare equal and 
> eventually share the same RPC connection when the other arguments used to 
> construct the ConnectionId are the same.
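> A minimal sketch of the idea, using java.util.Objects; the fields {{address}}, 
> {{protocol}} and {{ticket}} are stand-ins for the real ConnectionId arguments:
> {code}
> @Override
> public boolean equals(Object obj) {
>   if (this == obj) {
>     return true;
>   }
>   if (!(obj instanceof ConnectionId)) {
>     return false;
>   }
>   ConnectionId that = (ConnectionId) obj;
>   // IDs built from the same arguments compare equal, so the cached
>   // RPC connection is shared instead of a new one being opened
>   return Objects.equals(address, that.address)
>       && Objects.equals(protocol, that.protocol)
>       && Objects.equals(ticket, that.ticket);
> }
>
> @Override
> public int hashCode() {
>   // must be consistent with equals() for hash-based connection caches
>   return Objects.hash(address, protocol, ticket);
> }
> {code}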



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13606:
-
Fix Version/s: 3.0.0-alpha2

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a service-loader metadata file declaring the Swift FileSystem 
> implementation; remove the entry from core-default.xml
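> A sketch of what such a service-loader file looks like, assuming the 
> implementation class is {{org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem}}:
> {code}
> # contents of META-INF/services/org.apache.hadoop.fs.FileSystem
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
> {code}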



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13643:
-
Fix Version/s: 3.0.0-alpha2

> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13643.001.patch
>
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.
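> The corrected line would be:
> {code}
> int fileSizeMb = fileSizeKb / 1024;
> {code}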



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13169:
-
Fix Version/s: 3.0.0-alpha2

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch, 
> HADOOP-13169-branch-2-008.patch, HADOOP-13169-branch-2-009.patch, 
> HADOOP-13169-branch-2-010.patch
>
>
> When copying files to S3, depending on the file listing some mappers can hit 
> S3 partition hotspots. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.
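> A minimal sketch of the idea (not the actual patch), assuming the listing is 
> collected into a list of paths before being written out:
> {code}
> import java.util.Collections;
> import java.util.List;
> import org.apache.hadoop.fs.Path;
>
> // shuffle so consecutive listing entries no longer share
> // lexicographically-close keys that hit the same S3 partition
> static void randomize(List<Path> listing) {
>   Collections.shuffle(listing);
> }
> {code}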



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13544) JDiff reports unnecessarily show unannotated APIs and cause confusion while our javadocs only show annotated and public APIs

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13544:
-
Fix Version/s: 3.0.0-alpha2

> JDiff reports unnecessarily show unannotated APIs and cause confusion while 
> our javadocs only show annotated and public APIs
> ---
>
> Key: HADOOP-13544
> URL: https://issues.apache.org/jira/browse/HADOOP-13544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13544-20160825.txt, HADOOP-13544-20160921.txt
>
>
> Our javadocs only show annotated and @Public APIs (original JIRAs 
> HADOOP-7782, HADOOP-6658).
> But JDiff shows all APIs that are not annotated @Private. This causes 
> confusion about how we read the reports and which APIs we really broke.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13567) S3AFileSystem to override getStorageStatistics() and so serve up its statistics

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13567:
-
Fix Version/s: (was: 2.9.0)

> S3AFileSystem to override getStorageStatistics() and so serve up its 
> statistics
> ---
>
> Key: HADOOP-13567
> URL: https://issues.apache.org/jira/browse/HADOOP-13567
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Although S3AFileSystem collects lots of statistics, these aren't available 
> programmatically, as {{getStorageStatistics()}} isn't overridden.
> It must be overridden and serve up the local FS stats.
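> A minimal sketch of the override, assuming a hypothetical 
> {{storageStatistics}} field the filesystem already populates:
> {code}
> @Override
> public StorageStatistics getStorageStatistics() {
>   // expose the statistics this FS instance collects, instead of the
>   // default implementation which knows nothing about them
>   return storageStatistics;
> }
> {code}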



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13599:
-
Fix Version/s: 3.0.0-alpha2

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch, HADOOP-13599-branch-2-003.patch
>
>
> We've had a report of Hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139.
> S3a close() needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done
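> A minimal sketch of such a guard (field name invented), using 
> java.util.concurrent.atomic.AtomicBoolean:
> {code}
> private final AtomicBoolean closed = new AtomicBoolean(false);
>
> @Override
> public void close() throws IOException {
>   // no "synchronized": a concurrent second close() returns immediately
>   // instead of blocking behind the first one during shutdown
>   if (closed.getAndSet(true)) {
>     return;
>   }
>   // ... release thread pools, transfer manager, etc. ...
> }
> {code}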



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13663) Index out of range in SysInfoWindows

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13663:
-
Fix Version/s: 3.0.0-alpha2

> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13663.000.patch, HADOOP-13663.001.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and this returns empty values, which triggers the following 
> exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at java.lang.String.substring(String.java:1911)
>   at 
> org.apache.hadoop.util.SysInfoWindows.refreshIfNeeded(SysInfoWindows.java:158)
>   at 
> org.apache.hadoop.util.SysInfoWindows.getPhysicalMemorySize(SysInfoWindows.java:247)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getPhysicalMemorySize(ResourceCalculatorPlugin.java:63)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:139)
>  
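> A minimal sketch of the kind of guard needed (names hypothetical, not the 
> actual patch):
> {code}
> String sysInfoStr = readWinutilsOutput();  // hypothetical helper
> int end = sysInfoStr.indexOf("\r\n");
> if (end < 0) {
>   // empty or malformed winutils output: skip this refresh rather than
>   // let substring() throw StringIndexOutOfBoundsException
>   LOG.warn("Malformed winutils output: \"" + sysInfoStr + "\"");
>   return;
> }
> sysInfoStr = sysInfoStr.substring(0, end);
> {code}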



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13164:
-
Fix Version/s: 3.0.0-alpha2

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13164-branch-005.patch, 
> HADOOP-13164-branch-2-003.patch, HADOOP-13164-branch-2-004.patch, 
> HADOOP-13164.branch-2-002.patch, HADOOP-13164.branch-2.WIP.002.patch, 
> HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting in the folder structure, it can take a lot longer, as it 
> invokes getFileStatus multiple times. Instead, it should break 
> out of the loop once a non-empty directory is encountered. 
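> A minimal sketch of the proposed loop shape (helper names invented):
> {code}
> Path parent = path.getParent();
> while (parent != null && !parent.isRoot()) {
>   // once a non-empty directory is found, nothing above it can be a
>   // fake directory marker, so stop issuing getFileStatus calls
>   if (!isEmptyFakeDirectory(parent)) {
>     break;
>   }
>   deleteFakeDirectoryMarker(parent);
>   parent = parent.getParent();
> }
> {code}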



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13566) NPE in S3AFastOutputStream.write

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13566:
-
Fix Version/s: (was: 2.9.0)

> NPE in S3AFastOutputStream.write
> 
>
> Key: HADOOP-13566
> URL: https://issues.apache.org/jira/browse/HADOOP-13566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> During scale tests, we managed to trigger an NPE
> {code}
> test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate)
>   Time elapsed: 2.258 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132)
> {code}
> trace implies that {{buffer == null}}
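> One plausible guard, a sketch only; the real fix depends on how {{buffer}} 
> became null (e.g. a write racing with close()):
> {code}
> @Override
> public synchronized void write(byte[] b, int off, int len) throws IOException {
>   if (buffer == null) {
>     // stream closed or not initialised: fail with a clear message
>     // rather than an NPE deep inside the write path
>     throw new IOException("Stream is closed");
>   }
>   // ... existing buffering logic ...
> }
> {code}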



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13674:
-
Fix Version/s: 3.0.0-alpha2

> S3A can provide a more detailed error message when accessing a bucket through 
> an incorrect S3 endpoint.
> ---
>
> Key: HADOOP-13674
> URL: https://issues.apache.org/jira/browse/HADOOP-13674
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13674-branch-2.001.patch, 
> HADOOP-13674-branch-2.002.patch
>
>
> When accessing the S3 service through a region-specific endpoint, the bucket 
> must be located in that region.  If the client attempts to access a bucket 
> that is not located in that region, then the service replies with a 301 
> redirect and the correct region endpoint.  However, the exception thrown by 
> S3A does not include the correct endpoint.  If we included that information 
> in the exception, it would make it easier for users to diagnose and fix 
> incorrect configuration.
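> A hedged sketch of the idea; whether the SDK exposes the redirect target under 
> the "Endpoint" key of {{AmazonS3Exception#getAdditionalDetails()}} is an 
> assumption here:
> {code}
> try {
>   // ... an S3 call against the configured endpoint ...
> } catch (AmazonS3Exception e) {
>   if (e.getStatusCode() == 301) {
>     Map<String, String> details = e.getAdditionalDetails();
>     String endpoint = details == null ? null : details.get("Endpoint");
>     throw new IOException("Received permanent redirect"
>         + (endpoint == null ? "" : " to endpoint " + endpoint)
>         + "; the bucket is likely not in the region set by fs.s3a.endpoint", e);
>   }
>   throw e;
> }
> {code}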



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12667:
-
Fix Version/s: 3.0.0-alpha2

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-12667-branch-2-002.patch, 
> HADOOP-12667-branch-2-003.patch, HADOOP-12667-branch-2-004.patch, 
> HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586812#comment-15586812
 ] 

Hadoop QA commented on HADOOP-13732:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 54s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834009/HADOOP-13732.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ab815439f88d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b733a6f |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10824/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10824/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10824/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10824/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP 

[jira] [Updated] (HADOOP-13150) Avoid use of toString() in output of HDFS ACL shell commands.

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13150:
-
Fix Version/s: 3.0.0-alpha2

> Avoid use of toString() in output of HDFS ACL shell commands.
> -
>
> Key: HADOOP-13150
> URL: https://issues.apache.org/jira/browse/HADOOP-13150
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13150.001.patch, HADOOP-13150.002.patch
>
>
> The HDFS ACL shell commands have at least one usage of the standard Java 
> {{toString}} method to generate shell output ({{AclEntry#toString}}).  This 
> issue tracks conversion of that code to use methods other than {{toString}}.  
> The {{toString}} method is useful primarily for debugging.  It's preferable 
> to use a different method to ensure stable output for public interfaces, such 
> as the shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13323) Downgrade stack trace on FS load from Warn to debug

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13323:
-
Fix Version/s: 3.0.0-alpha2

Reminder, please set a 3.x fix version when committing to trunk. Thanks!

> Downgrade stack trace on FS load from Warn to debug
> ---
>
> Key: HADOOP-13323
> URL: https://issues.apache.org/jira/browse/HADOOP-13323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13323-branch-2-001.patch
>
>
> HADOOP-12636 catches exceptions on FS creation, but prints a stack trace @ 
> warn every time; this is noisy and irrelevant if the installation doesn't 
> need connectivity to a specific filesystem or object store.
> I propose only printing the toString values of the exception chain @ warn; 
> the full stack comes out at debug (sketched below).
> We could do some more tuning: 
> * have a specific log for this exception, which allows installations to turn 
> even the warnings off.
> * add a link to a wiki page listing the dependencies of the shipped 
> filesystems
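> A minimal sketch of the proposed split (logging calls illustrative):
> {code}
> } catch (Exception e) {
>   // WARN gets only the toString chain, one entry per cause
>   StringBuilder summary = new StringBuilder();
>   for (Throwable t = e; t != null; t = t.getCause()) {
>     summary.append(t).append("; ");
>   }
>   LOG.warn("Cannot load filesystem: " + summary);
>   // the full stack only appears when debug logging is enabled
>   LOG.debug("Cannot load filesystem", e);
> }
> {code}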



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12977) s3a to handle delete("/", true) robustly

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12977:
-
Fix Version/s: 3.0.0-alpha2

> s3a to handle delete("/", true) robustly
> 
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-12977-001.patch, HADOOP-12977-002.patch, 
> HADOOP-12977-branch-2-002.patch, HADOOP-12977-branch-2-002.patch, 
> HADOOP-12977-branch-2-003.patch, HADOOP-12977-branch-2-004.patch, 
> HADOOP-12977-branch-2-005.patch, HADOOP-12977-branch-2-006.patch
>
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything 
> underneath, while preserving the root dir itself".
> # s3a needs to support this.
> # this slipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13627:
-
Fix Version/s: 3.0.0-alpha2

> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13627.01.patch, HADOOP-13627.02.patch, 
> HADOOP-13627.03.patch
>
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and related classes off simple IOEs and onto 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly —tests that don't break if the text 
> is changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this
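> A minimal sketch of the proposed exception (constant name invented for 
> illustration):
> {code}
> public class KerberosAuthException extends IOException {
>   // public constants let tests match on the message
>   // without duplicating the literal text
>   public static final String LOGIN_FAILURE = "Login failure";
>
>   public KerberosAuthException(String msg) {
>     super(msg);
>   }
>
>   public KerberosAuthException(String msg, Throwable cause) {
>     super(msg, cause);
>   }
> }
> {code}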



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13692) hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts.

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13692:
-
Fix Version/s: 3.0.0-alpha2

> hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent 
> classpath conflicts.
> ---
>
> Key: HADOOP-13692
> URL: https://issues.apache.org/jira/browse/HADOOP-13692
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13692-branch-2.001.patch
>
>
> If an end user's application has a dependency on hadoop-aws and no other 
> Hadoop artifacts, then it picks up a transitive dependency on Jackson 2.5.3 
> jars through the AWS SDK.  This can cause conflicts at deployment time, 
> because Hadoop has a dependency on version 2.2.3, and the 2 versions are not 
> compatible with one another.  We can prevent this problem by changing 
> hadoop-aws to declare explicit dependencies on the Jackson artifacts, at the 
> version Hadoop wants.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13710) Suppress CachingGetSpaceUsed from logging interrupted exception stack trace

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13710:
-
Fix Version/s: 3.0.0-alpha2

Reminder, please set a 3.x fix version when committing to trunk. Thanks!

> Suppress CachingGetSpaceUsed from logging interrupted exception stack trace
> -
>
> Key: HADOOP-13710
> URL: https://issues.apache.org/jira/browse/HADOOP-13710
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Hanisha Koneru
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-13710.000.patch
>
>
> The CachingGetSpaceUsed thread is typically interrupted when the node is 
> shut down. Since this is a routine operation, there is little value in 
> printing the stack trace of an {{InterruptedException}}.
> {quote}
> 2016-10-11 10:02:25,894 WARN  fs.CachingGetSpaceUsed 
> (CachingGetSpaceUsed.java:run(180)) - Thread Interrupted waiting to refresh 
> disk information
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:176)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
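> A minimal sketch of the quieter handling (log text illustrative):
> {code}
> try {
>   Thread.sleep(refreshInterval);
> } catch (InterruptedException e) {
>   // routine at shutdown: one line at WARN, no stack trace, and the
>   // interrupt flag re-set so the refresh loop can exit cleanly
>   LOG.warn("Thread interrupted waiting to refresh disk information: "
>       + e.getMessage());
>   Thread.currentThread().interrupt();
> }
> {code}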



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13565:
-
Fix Version/s: 3.0.0-alpha2

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13565.00.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use canonicalized server 
> name derived from HTTP request to build server SPN and authenticate client. 
> This can be problematic if the HTTP client/server are running from a 
> non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, the server is running its HTTP endpoint using an SPN from the 
> client realm:
> hadoop.http.authentication.kerberos.principal = HTTP/_HOST@TEST.COM
> The client sends a request to the namenode at http://NN1.example.com:50070 from 
> client.test@test.com.
> The client talks to the KDC first and gets a service ticket 
> HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client's name resolution or the HTTP 
> Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
> will always return an SPN in the local realm (HTTP/nn.example.com@EXAMPLE.COM), 
> regardless of whether the server's login SPN is from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the 1st parameter to gssManager.createCredential()). 
> This way we avoid depending on HTTP client behavior (Host header, or name 
> resolution such as CNAME) or assumptions about the local realm. 
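> A minimal sketch of the proposed call (error handling elided; the SPNEGO OID 
> is 1.3.6.1.5.5.2):
> {code}
> // null server name: accept any principal in the server's login keytab,
> // instead of rebuilding an SPN from the client-controlled Host header
> GSSManager gssManager = GSSManager.getInstance();
> Oid spnegoOid = new Oid("1.3.6.1.5.5.2");
> GSSCredential serverCreds = gssManager.createCredential(
>     null,
>     GSSCredential.INDEFINITE_LIFETIME,
>     spnegoOid,
>     GSSCredential.ACCEPT_ONLY);
> {code}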



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13686:
-
Fix Version/s: 3.0.0-alpha2

> Adding additional unit test for Trash (I)
> -
>
> Key: HADOOP-13686
> URL: https://issues.apache.org/jira/browse/HADOOP-13686
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13686-branch-2.8.01.patch, HADOOP-13686.01.patch, 
> HADOOP-13686.02.patch, HADOOP-13686.03.patch, HADOOP-13686.04.patch, 
> HADOOP-13686.05.patch, HADOOP-13686.06.patch, HADOOP-13686.07.patch
>
>
> This ticket is opened to track adding the following unit tests in 
> hadoop-common. 
> # test users can delete their own trash directory
> # test users can delete an empty directory and the directory is moved to trash
> # test fs.trash.interval with invalid values such as 0 or negative



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13560:
-
Fix Version/s: 3.0.0-alpha2

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11244) The HCFS contract test testRenameFileBeingAppended doesn't do a rename

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11244:
-
Fix Version/s: (was: 2.8.0)

> The HCFS contract test testRenameFileBeingAppended doesn't do a rename
> --
>
> Key: HADOOP-11244
> URL: https://issues.apache.org/jira/browse/HADOOP-11244
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Noah Watkins
>Assignee: jay vyas
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11244.patch, HADOOP-11244.patch
>
>
> The test AbstractContractAppendTest::testRenameFileBeingAppended appears to 
> assert the behavior of renaming a file opened for writing. However, the 
> assertion "assertPathExists("renamed destination file does not exist", 
> renamed);" fails because it appears that the file "renamed" is never created 
> (ostensibly it should be the "target" file that has been renamed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-6610) Hadoop conf/ servlet improvements

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-6610:

Fix Version/s: (was: 2.8.0)

> Hadoop conf/ servlet improvements
> -
>
> Key: HADOOP-6610
> URL: https://issues.apache.org/jira/browse/HADOOP-6610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.22.0
>Reporter: Steve Loughran
>Priority: Minor
>
> I'm playing with the conf/ servlet, trying to do a workflow that
> # pulls down the conf servlet from a well known URL (this is trickier when 
> your VMs are dynamic, but possible)
> # saves it locally, using {{}} task
> #  {{}} some info on the machines in the allocated cluster, like their 
> external hostnames
> # SCP in the configuration files, JAR files needed to submit work, 
> # submit work via SSH
> I have to SSH as the VMs have different internal/external addresses; HDFS 
> gets upset.
> Some issues I've found so far
> # It's good to set expires headers on everything; HADOOP-6607 covers that
> # Having sorted conf values makes it easier to locate properties, otherwise 
> you have to save it to a text editor and search around
> # the  option makes things noisy
> # Saving as a java.util.Properties would let me pull these things into a 
> build file or other tool very easily. This is easy to test too.
> # Have a comment at the top listing when the conf was generated, and the 
> hostname. Maybe even make them conf values
> More tricky is the conf options that are dynamic, things like 
> {code}
> dfs.datanode.address = 0.0.0.0:0
> {code}
> These show what the node was started with, not what it actually got. I am 
> doing a workaround there in my code (setting the actual values in the conf 
> file with {{live.dfs.datanode.address}}, etc., and extracting them that way). I 
> don't want to lose the original values, but do want the real ones



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-3584) Add an explicit HadoopConfigurationException that extends RuntimeException

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-3584:

Fix Version/s: (was: 2.8.0)

> Add an explicit HadoopConfigurationException that extends RuntimeException
> --
>
> Key: HADOOP-3584
> URL: https://issues.apache.org/jira/browse/HADOOP-3584
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.19.0
>Reporter: Steve Loughran
>Priority: Minor
>
> It is possible for a get() or set() operation to throw an exception today, 
> especially if a security manager is blocking property access. As more complex 
> cross-references are used, the likelihood for failure is higher.
> Yet there is no way for a Configuration or subclass to throw an exception 
> today except by throwing a general purpose RuntimeException.
> I propose having a specific HadoopConfigurationException that extends 
> RuntimeException. Classes that read in configurations can explicitly catch 
> and handle these. The exception could
> * be raised on some parse error (a float attribute is not a parseable float, 
> etc)
> * be raised on some error caused by an implementation of a configuration 
> service API
> * wrap underlying errors from different implementations (like JNDI exceptions)
> * wrap security errors and other generic problems
> I'm not going to propose having specific errors for parsing problems versus an 
> undefined name/value pair, though that may be useful feature creep. It 
> certainly makes bridging from different back-ends trickier. 
> This would not be incompatible with the existing code, at least from my 
> current experiments. What is more likely to cause problems is having the 
> get() operations failing, as that is not something that is broadly tested 
> (yet). If we do want to test it, we could have a custom mock back-end that 
> could be configured to fail on a get() of a specific option.
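> A minimal sketch of the proposed class:
> {code}
> public class HadoopConfigurationException extends RuntimeException {
>   public HadoopConfigurationException(String message) {
>     super(message);
>   }
>
>   public HadoopConfigurationException(String message, Throwable cause) {
>     // wraps parse errors, back-end failures (e.g. JNDI), security errors
>     super(message, cause);
>   }
> }
> {code}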



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7702) Show the default configuration value in /conf servlet as well

2016-10-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-7702:

Fix Version/s: (was: 2.8.0)

> Show the default configuration value in /conf servlet as well
> -
>
> Key: HADOOP-7702
> URL: https://issues.apache.org/jira/browse/HADOOP-7702
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.21.1
>Reporter: Liyin Tang
>
> HADOOP-6408 has shown all the configuration values in the configuration file 
> in /conf servlet.
> But some of the configuration value do not exist in config file but be set in 
> the code as default value.
> It would be nice to show all these default configuration values in the 
> servlet as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12774:
---
Status: Open  (was: Patch Available)

This needs to be rebased after the commit of HADOOP-13560.

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work in a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
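> A minimal sketch of the replacement lookup:
> {code}
> import java.io.IOException;
> import org.apache.hadoop.security.UserGroupInformation;
>
> // resolves correctly under HADOOP_USER_NAME and inside doAs blocks,
> // unlike System.getProperty("user.name")
> static String currentUserName() throws IOException {
>   return UserGroupInformation.getCurrentUser().getShortUserName();
> }
> {code}
> (The actual UGI method is {{getShortUserName()}}, which is what the summary's 
> {{getShortname()}} refers to.)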



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Status: Open  (was: Patch Available)

Patch 003 really should fix the whitespace warnings that were flagged in the 
last pre-commit run.  I confirmed by running the same check that Yetus runs, 
and I didn't find any trailing whitespace remaining in the file.

{code}
grep -n -I --extended-regexp '[[:blank:]]$' 
~/git/hadoop/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
{code}

I do need to rebase this though after the commit of HADOOP-13560.

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.
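> A minimal sketch of the sharing pattern; the holder class name is invented, 
> and the exact way to construct the provider varies by AWS SDK version:
> {code}
> import com.amazonaws.auth.AWSCredentialsProvider;
> import com.amazonaws.auth.InstanceProfileCredentialsProvider;
>
> final class SharedInstanceProfileCredentialsProviderHolder {
>   // one provider per JVM, so all S3A threads share one metadata-service
>   // client and its cached credentials
>   private static final AWSCredentialsProvider INSTANCE =
>       new InstanceProfileCredentialsProvider();
>
>   private SharedInstanceProfileCredentialsProviderHolder() {
>   }
>
>   static AWSCredentialsProvider getInstance() {
>     return INSTANCE;
>   }
> }
> {code}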



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586728#comment-15586728
 ] 

Hadoop QA commented on HADOOP-13727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 3 
fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
18s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Comment Edited] (HADOOP-13675) Bug in return value for delete() calls in WASB

2016-10-18 Thread Jameel Naina Mohamed (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586655#comment-15586655
 ] 

Jameel Naina Mohamed edited comment on HADOOP-13675 at 10/18/16 9:07 PM:
-

Hi [~cnauroth], thanks for reviewing this patch. 
I have incorporated your comments. 
The TestNativeAzureFileSystemMocked test failed because the list API returned 
duplicate entries for files in a directory. I made a change to fix this issue, 
and now all tests are passing. 
I have a new patch with the fix. Could you grant me access to upload the patch 
to this jira?


was (Author: jameeln):
Hi [~cnauroth], thanks for reviewing this patch. 
I have incorporated your comments. 
The TestNativeAzureFileSystemMocked test failed because the list API returned 
duplicate entries for files in a directory. I made a change to fix this issue, 
and now all tests are passing. 
I'm adding new patch HADOOP-13675.002.patch for your review. 

> Bug in return value for delete() calls in WASB
> --
>
> Key: HADOOP-13675
> URL: https://issues.apache.org/jira/browse/HADOOP-13675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.9.0
>
> Attachments: HADOOP-13675.001.patch
>
>
> The current implementation of WASB does not correctly handle multiple 
> threads/clients calling delete on the same file. The expected behavior in 
> such scenarios is that only one of the threads should delete the file and 
> return true, while all other threads should receive false. However, in the 
> current implementation, even though only one thread deletes the file, multiple 
> clients incorrectly get "true" as the return value from the delete() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13675) Bug in return value for delete() calls in WASB

2016-10-18 Thread Jameel Naina Mohamed (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586655#comment-15586655
 ] 

Jameel Naina Mohamed commented on HADOOP-13675:
---

Hi [~cnauroth], thanks for reviewing this patch. 
I have incorporated your comments. 
The TestNativeAzureFileSystemMocked test failed because the list API returned 
duplicate entries for files in a directory. I made a change to fix this issue, 
and now all tests are passing. 
I'm adding new patch HADOOP-13675.002.patch for your review. 

> Bug in return value for delete() calls in WASB
> --
>
> Key: HADOOP-13675
> URL: https://issues.apache.org/jira/browse/HADOOP-13675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.9.0
>
> Attachments: HADOOP-13675.001.patch
>
>
> The current implementation of WASB does not correctly handle multiple 
> threads/clients calling delete on the same file. The expected behavior in 
> such scenarios is that only one of the threads should delete the file and 
> return true, while all other threads should receive false. However, in the 
> current implementation, even though only one thread deletes the file, multiple 
> clients incorrectly get "true" as the return value from the delete() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586642#comment-15586642
 ] 

Hudson commented on HADOOP-13560:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10632 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10632/])
HADOOP-13560. S3ABlockOutputStream to support huge (many GB) file (stevel: rev 
6c348c56918973fd988b110e79231324a8befe12)
* (edit) hadoop-tools/hadoop-aws/pom.xml
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputDisk.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFastOutputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesByteBufferBlocks.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesArrayBlocks.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ADataBlocks.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATestUtils.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestBlockingThreadPoolExecutorService.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesDiskBlocks.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionFastOutputStream.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesClassicOutput.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContextStatistics.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/SemaphoredDelegatingExecutor.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionBlockOutputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockingThreadPool.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestDataBlocks.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputByteBuffer.java


> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 

[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Patch 016 under HADOOP-13703 was +1'd and applied to branch-2.8 and later.

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename.
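
As a rough illustration of the verification step the description calls for, 
here is a minimal sketch (the real coverage lives in the ITestS3AHugeFiles* 
classes listed in the commit above; the method and paths here are invented):

{code:java}
import static org.junit.Assert.*;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HugeRenameCheck {
  // Sketch of the check described above; src/dst are hypothetical paths
  // to a multi-GB object already written through the block output stream.
  static void verifyHugeRename(FileSystem fs, Path src, Path dst)
      throws Exception {
    long expectedLen = fs.getFileStatus(src).getLen();
    assertTrue("rename failed", fs.rename(src, dst));
    FileStatus copied = fs.getFileStatus(dst);
    assertEquals("length lost in server-side copy",
        expectedLen, copied.getLen());
    assertFalse("source still present after rename", fs.exists(src));
    // Verifying that user metadata/encryption headers survive the copy
    // needs raw S3 client calls; the FileSystem API does not expose them.
  }
}
{code}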



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, 
> HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch, 
> HADOOP-13560-branch-2-016.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> for doing test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586553#comment-15586553
 ] 

Steve Loughran commented on HADOOP-13703:
-

OK, committed to branch-2.8 and later, including trunk. There's now less diff 
between branch-2 and trunk, BTW.

Committed under the HADOOP-13560 issue ID, so there'll be no merge 
notifications in this JIRA.

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, 
> HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch, 
> HADOOP-13560-branch-2-016.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> for doing test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-18 Thread Anant Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anant Sharma updated HADOOP-13731:
--
Description: 
I am trying to build Hadoop 2.7.2 (taken directly from upstream with no 
modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial), but I get the 
following errors. The result is the same with OpenJDK 8, but I switched back 
to OpenJDK 7 since it's the recommended version. This is a critical issue, 
since I am unable to move beyond building Hadoop.

Other configuration details:

Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
Maven: 3.3.9

Command Line:

 mvn package -Pdist -DskipTests -Dtar


Build log:

[INFO] Building jar: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
[INFO]
[INFO] 
[INFO] Building Apache Hadoop Common 2.7.2
[INFO] 
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
[mkdir] Created dir: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
---
[INFO]
[INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
hadoop-common ---
[WARNING] [svn, info] failed with error code 1
[WARNING] [git, branch] failed with error code 128
[INFO] SCM: NONE
[INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 7 resources
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-common 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 852 source files to 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
An exception has occurred in the compiler (1.7.0_95). Please file a bug at the 
Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
checking the Bug Parade for duplicates. Include your program and the following 
diagnostic in your report.  Thank you.
java.lang.NullPointerException
at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
at com.sun.tools.javac.jvm.Gen.visitMethodDef(Gen.java:885)
at com.sun.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:669)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genClass(Gen.java:2235)
at com.sun.tools.javac.main.JavaCompiler.genCode(JavaCompiler.java:712)
at 
com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1451)
at 
com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1419)
at com.sun.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:870)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:829)
at com.sun.tools.javac.main.Main.compile(Main.java:439)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:132)
at 

[jira] [Commented] (HADOOP-13733) Support WASB connections through an HTTP proxy server.

2016-10-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586508#comment-15586508
 ] 

Chris Nauroth commented on HADOOP-13733:


We can set a proxy server to use on the Azure Storage requests by calling 
{{OperationContext#setProxy(java.net.Proxy)}}.

https://github.com/Azure/azure-storage-java/blob/v4.2.0/microsoft-azure-storage/src/com/microsoft/azure/storage/OperationContext.java#L400-L409
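
For context, a minimal sketch of what that call pattern would look like, 
assuming the azure-storage-java 4.x API linked above; the proxy host and port 
are placeholders, and the configuration plumbing is exactly what this JIRA 
would add:

{code:java}
import java.net.InetSocketAddress;
import java.net.Proxy;
import com.microsoft.azure.storage.OperationContext;

public class ProxyContextSketch {
  // Sketch: build an OperationContext that routes Azure Storage calls
  // through an HTTP proxy. Host and port are placeholders.
  static OperationContext proxiedContext() {
    Proxy proxy = new Proxy(Proxy.Type.HTTP,
        new InetSocketAddress("proxy.example.com", 3128));
    OperationContext context = new OperationContext();
    context.setProxy(proxy);
    return context;
  }
  // The context is then passed as the trailing argument of SDK calls, e.g.
  // CloudBlockBlob#upload(stream, length, accessCondition, options, context).
}
{code}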


> Support WASB connections through an HTTP proxy server.
> --
>
> Key: HADOOP-13733
> URL: https://issues.apache.org/jira/browse/HADOOP-13733
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Chris Nauroth
>
> WASB currently does not support use of an HTTP proxy server to connect to the 
> Azure Storage back-end.  The Azure Storage SDK does support use of a proxy, 
> so we can enhance WASB to read proxy settings from configuration and pass 
> them along in the Azure Storage SDK calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13733) Support WASB connections through an HTTP proxy server.

2016-10-18 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13733:
--

 Summary: Support WASB connections through an HTTP proxy server.
 Key: HADOOP-13733
 URL: https://issues.apache.org/jira/browse/HADOOP-13733
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Reporter: Chris Nauroth


WASB currently does not support use of an HTTP proxy server to connect to the 
Azure Storage back-end.  The Azure Storage SDK does support use of a proxy, so 
we can enhance WASB to read proxy settings from configuration and pass them 
along in the Azure Storage SDK calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Attachment: HADOOP-13727-branch-2.003.patch

It's irritating to get whitespace warnings every time a patch touches 
core-default.xml, so here is revision 003 to clean up all of the remaining 
whitespace problems.  If people don't want to +1 this revision because it 
touches unrelated lines of code, then I understand, and I'll separate that 
cleanup into a different JIRA.

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.
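
A minimal sketch of the sharing approach described above, assuming the AWS SDK 
v1 {{AWSCredentialsProvider}} interface; the class name is illustrative and 
not necessarily what the attached patches use:

{code:java}
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

public final class SharedInstanceProfileCredentialsProvider
    implements AWSCredentialsProvider {

  // One process-wide instance, so every S3A thread reuses the same cached
  // instance-metadata lookup instead of issuing its own IMDS calls.
  private static final SharedInstanceProfileCredentialsProvider INSTANCE =
      new SharedInstanceProfileCredentialsProvider();

  private final InstanceProfileCredentialsProvider delegate =
      new InstanceProfileCredentialsProvider();

  public static SharedInstanceProfileCredentialsProvider getInstance() {
    return INSTANCE;
  }

  private SharedInstanceProfileCredentialsProvider() {
  }

  @Override
  public AWSCredentials getCredentials() {
    return delegate.getCredentials();
  }

  @Override
  public void refresh() {
    delegate.refresh();
  }
}
{code}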



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586438#comment-15586438
 ] 

Hadoop QA commented on HADOOP-13727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 3 
fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586420#comment-15586420
 ] 

Mike Yoder commented on HADOOP-13732:
-

I'd have to make a dependency-check-specific note in BUILDING.txt, which seems 
a little awkward. (The normal build isn't affected, of course.) I'll see what I 
can do. My only alternative idea is a comment around this plugin in pom.xml. I 
do agree it needs to be documented somewhere.

* I don't even think that maven is _available_ on RHEL 6.6
* My RHEL 7.2 machine looks like it would use version 3.0.5-16
* My Ubuntu 16.04 machine is using 3.3.9
* Looks like Ubuntu 14.04 uses 3.0.5-1

The maven release history page is at https://maven.apache.org/docs/history.html



> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-18 Thread Anant Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anant Sharma updated HADOOP-13731:
--
Description: 
I am trying to build Hadoop 2.7.2 (taken directly from upstream with no 
modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial), but I get the 
following errors. The result is the same with OpenJDK 8, but I switched back 
to OpenJDK 7 since it's the recommended version. This is a critical issue, 
since I am unable to move beyond building Hadoop.

Other configuration details:

Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
Maven: 3.3.9

Build log:

[INFO] Building jar: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
[INFO]
[INFO] 
[INFO] Building Apache Hadoop Common 2.7.2
[INFO] 
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
[mkdir] Created dir: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
---
[INFO]
[INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
hadoop-common ---
[WARNING] [svn, info] failed with error code 1
[WARNING] [git, branch] failed with error code 128
[INFO] SCM: NONE
[INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 7 resources
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-common 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 852 source files to 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
An exception has occurred in the compiler (1.7.0_95). Please file a bug at the 
Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
checking the Bug Parade for duplicates. Include your program and the following 
diagnostic in your report.  Thank you.
java.lang.NullPointerException
at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
at com.sun.tools.javac.jvm.Gen.visitMethodDef(Gen.java:885)
at com.sun.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:669)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genClass(Gen.java:2235)
at com.sun.tools.javac.main.JavaCompiler.genCode(JavaCompiler.java:712)
at 
com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1451)
at 
com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1419)
at com.sun.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:870)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:829)
at com.sun.tools.javac.main.Main.compile(Main.java:439)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:132)
at 
org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess(JavaxToolsCompiler.java:126)
   

[jira] [Commented] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-18 Thread Anant Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586389#comment-15586389
 ] 

Anant Sharma commented on HADOOP-13731:
---

Hi Mingliang,

Following is the configuration:

JDK: OpenJDK 7
OS: Ubuntu Xenial (16.04)
Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
Maven: 3.3.9

Let me know if you need any other details. I will also ask on the mailing list.

Thanks

> Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
> ---
>
> Key: HADOOP-13731
> URL: https://issues.apache.org/jira/browse/HADOOP-13731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: OS : Ubuntu 16.04 (Xenial)
> JDK: OpenJDK 7 and OpenJDK 8
>Reporter: Anant Sharma
>Priority: Critical
>  Labels: build
>
> I am trying to build Hadoop 2.7.2 (taken directly from upstream with no 
> modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial), but I get the 
> following errors. The result is the same with OpenJDK 8, but I switched back 
> to OpenJDK 7 since it's the recommended version. This is a critical issue, 
> since I am unable to move beyond building Hadoop.
> [INFO] Building jar: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
> [INFO]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.7.2
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
> [INFO] Executed tasks
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [WARNING] [git, branch] failed with error code 128
> [INFO] SCM: NONE
> [INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
> [INFO]
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO]
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 852 source files to 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
> An exception has occurred in the compiler (1.7.0_95). Please file a bug at 
> the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
> checking the Bug Parade for duplicates. Include your program and the 
> following diagnostic in your report.  Thank you.
> java.lang.NullPointerException
> at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
> at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
> at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
> at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
> at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
> at com.sun.tools.javac.jvm.Gen.visitMethodDef(Gen.java:885)
> at 
> 

[jira] [Commented] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586354#comment-15586354
 ] 

Andrew Wang commented on HADOOP-13732:
--

Hi Mike, if we need to use a more recent version of Maven, then we also need to 
update BUILDING.txt.

Could you comment on the availability of the required Maven version on a few 
common OSs? e.g. RHEL6, 7, Ubuntu 12/14/16.

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586327#comment-15586327
 ] 

Andrew Wang commented on HADOOP-13696:
--

This JIRA is a bit weird, since it means (unlike the other filesystems) SFTP 
won't work out-of-the-box.

Can we split it out into its own artifact like the cloud connectors?

> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
> downstream. Marking it as "provided" would mean that it would only be needed 
> by those programs which wanted the SFTP filesystem, and, if they wanted to 
> use a different jsch version, there'd be no maven problems
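
For reference, the proposed change amounts to a one-line scope adjustment on 
the jsch dependency in hadoop-common's pom.xml. A sketch, with the version 
property assumed rather than copied from the patch:

{code:xml}
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <version>${jsch.version}</version>
  <scope>provided</scope>
</dependency>
{code}

Downstream projects that want the SFTP filesystem would then declare their own 
jsch dependency at whatever version suits them.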



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Status: Patch Available  (was: Open)

Ping [~andrew.wang]

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13732:

Attachment: HADOOP-13732.001.patch

> Upgrade OWASP dependency-check plugin version
> -
>
> Key: HADOOP-13732
> URL: https://issues.apache.org/jira/browse/HADOOP-13732
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-13732.001.patch
>
>
> For reasons I don't fully understand, the current version (1.3.6) of the 
> OWASP dependency-check plugin produces an essentially empty report on trunk 
> (3.0.0).  After some research, it appears that this plugin has undergone 
> significant work in the latest version, 1.4.3. Upgrading to this version 
> produces the expected full report.
> The only gotcha is that a new-ish version of maven is required. I'm using 
> 3.2.2; I know that 3.0.x fails with a strange error.
> This plugin was introduced in HADOOP-13198.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13732) Upgrade OWASP dependency-check plugin version

2016-10-18 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-13732:
---

 Summary: Upgrade OWASP dependency-check plugin version
 Key: HADOOP-13732
 URL: https://issues.apache.org/jira/browse/HADOOP-13732
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Mike Yoder
Assignee: Mike Yoder
Priority: Minor


For reasons I don't fully understand, the current version (1.3.6) of the OWASP 
dependency-check plugin produces an essentially empty report on trunk (3.0.0).  
After some research, it appears that this plugin has undergone significant work 
in the latest version, 1.4.3. Upgrading to this version produces the expected 
full report.

The only gotcha is that a new-ish version of maven is required. I'm using 
3.2.2; I know that 3.0.x fails with a strange error.

This plugin was introduced in HADOOP-13198.
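
For anyone wanting to reproduce the report, the change is essentially a 
version bump of the plugin declaration in the top-level pom.xml. A sketch; the 
surrounding configuration from HADOOP-13198 is assumed unchanged:

{code:xml}
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>1.4.3</version>
</plugin>
{code}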



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586245#comment-15586245
 ] 

Mingliang Liu commented on HADOOP-13731:


Please ask on the u...@hadoop.apache.org mailing list, including the command 
line you ran and the software versions (e.g. Maven). This is not likely a bug. 
Before that, please read through the {{BUILDING.txt}} file in the source 
package.

> Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
> ---
>
> Key: HADOOP-13731
> URL: https://issues.apache.org/jira/browse/HADOOP-13731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: OS : Ubuntu 16.04 (Xenial)
> JDK: OpenJDK 7 and OpenJDK 8
>Reporter: Anant Sharma
>Priority: Critical
>  Labels: build
>
> I am trying to build Hadoop 2.7.2 (taken directly from upstream with no 
> modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial), but I get the 
> following errors. The result is the same with OpenJDK 8, but I switched back 
> to OpenJDK 7 since it's the recommended version. This is a critical issue, 
> since I am unable to move beyond building Hadoop.
> [INFO] Building jar: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
> [INFO]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.7.2
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
> [INFO] Executed tasks
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [WARNING] [git, branch] failed with error code 128
> [INFO] SCM: NONE
> [INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
> [INFO]
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO]
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 852 source files to 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
> An exception has occurred in the compiler (1.7.0_95). Please file a bug at 
> the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
> checking the Bug Parade for duplicates. Include your program and the 
> following diagnostic in your report.  Thank you.
> java.lang.NullPointerException
> at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
> at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
> at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
> at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
> at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
> at com.sun.tools.javac.jvm.Gen.visitMethodDef(Gen.java:885)
> at 
> 

[jira] [Commented] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586219#comment-15586219
 ] 

Steve Loughran commented on HADOOP-13703:
-

thanks, I'll apply this in.

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, 
> HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch, 
> HADOOP-13560-branch-2-016.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> for doing test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13522) Add %A and %a formats for fs -stat command to print permissions

2016-10-18 Thread Alex Garbarini (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586174#comment-15586174
 ] 

Alex Garbarini commented on HADOOP-13522:
-

Thank you, Akira, for your patience and help teaching! I'm already hard at work 
on the next JIRA.

> Add %A and %a formats for fs -stat command to print permissions
> ---
>
> Key: HADOOP-13522
> URL: https://issues.apache.org/jira/browse/HADOOP-13522
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Alex Garbarini
>Assignee: Alex Garbarini
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13522.001.patch, HADOOP-13522.002.patch, 
> HADOOP-13522.003.patch, HADOOP-13522.004.patch, HADOOP-13522.005.patch, 
> HADOOP-13522.006.patch, HADOOP-13522.007.patch, HADOOP-13522.008.patch
>
>
> This patch adds to fs/shell/Stat.java the missing %a and %A options. 
> FileStatus already contains the getPermission() method required for returning 
> symbolic permissions. FsPermission contains a method to return the binary 
> short, but nothing to present it in standard octal format. 
> Most UNIX admins base their work on such standard octal permissions. Hence, 
> this patch also introduces one tiny method to translate the toShort() return 
> value into octal.
> The build has already passed unit tests and javadoc.
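
A minimal sketch of the octal translation described above (a standalone 
illustration, not the patch itself):

{code:java}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionFormats {
  // Sketch: derive both formats the new %a / %A options print.
  static String octal(FileStatus status) {
    FsPermission perm = status.getPermission();
    // Mask to the lower nine permission bits and render in octal, e.g. "644".
    return String.format("%03o", perm.toShort() & 0777);
  }

  static String symbolic(FileStatus status) {
    // FsPermission#toString() already yields e.g. "rw-r--r--".
    return status.getPermission().toString();
  }
}
{code}

With the patch applied, {{hadoop fs -stat '%a %A' /some/path}} would then 
print, e.g., {{644 rw-r--r--}}.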



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Attachment: HADOOP-13727-branch-2.002.patch

Thank you, Steve.  I am attaching revision 002 with your suggestions.  I used 
{{}} for the JavaDocs, because the ordering is significant.

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12259) Utility to Dynamic port allocation

2016-10-18 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586074#comment-15586074
 ] 

Zhe Zhang commented on HADOOP-12259:


Thanks for the work, Brahma. This is a good addition to branch-2.7, and I just 
did the backport.

> Utility to Dynamic port allocation
> --
>
> Key: HADOOP-12259
> URL: https://issues.apache.org/jira/browse/HADOOP-12259
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test, util
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-12259.patch
>
>
> As per the discussion in YARN-3528 and [~rkanter]'s comment [here | 
> https://issues.apache.org/jira/browse/YARN-3528?focusedCommentId=14637700=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14637700
>  ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12259) Utility to Dynamic port allocation

2016-10-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12259:
---
Fix Version/s: 2.7.4

> Utility to Dynamic port allocation
> --
>
> Key: HADOOP-12259
> URL: https://issues.apache.org/jira/browse/HADOOP-12259
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test, util
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-12259.patch
>
>
> As per the discussion in YARN-3528 and [~rkanter]'s comment [here | 
> https://issues.apache.org/jira/browse/YARN-3528?focusedCommentId=14637700=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14637700
>  ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics to graphite until restarted

2016-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586015#comment-15586015
 ] 

Hadoop QA commented on HADOOP-13730:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 118 unchanged - 14 fixed = 118 total (was 132) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833976/0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9046dc25efc4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d26a1bb |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10822/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10822/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10822/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> After 5 connection failures, yarn stops sending metrics to graphite until 
> restarted
> 

[jira] [Created] (HADOOP-13731) Can't compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-18 Thread Anant Sharma (JIRA)
Anant Sharma created HADOOP-13731:
-

 Summary: Can't compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
 Key: HADOOP-13731
 URL: https://issues.apache.org/jira/browse/HADOOP-13731
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.2
 Environment: OS : Ubuntu 16.04 (Xenial)
JDK: OpenJDK 7 and OpenJDK 8
Reporter: Anant Sharma
Priority: Critical


I am trying to build Hadoop 2.7.2 (directly from upstream, with no 
modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial), but I get the 
following errors. The result is the same with OpenJDK 8, but I switched back 
to OpenJDK 7 since it's the recommended version. This is a critical issue 
since I am unable to move beyond building Hadoop.


[INFO] Building jar: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
[INFO]
[INFO] 
[INFO] Building Apache Hadoop Common 2.7.2
[INFO] 
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
[mkdir] Created dir: 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
---
[INFO]
[INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
hadoop-common ---
[WARNING] [svn, info] failed with error code 1
[WARNING] [git, branch] failed with error code 128
[INFO] SCM: NONE
[INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 7 resources
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-common 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 852 source files to 
/home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
An exception has occurred in the compiler (1.7.0_95). Please file a bug at the 
Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
checking the Bug Parade for duplicates. Include your program and the following 
diagnostic in your report.  Thank you.
java.lang.NullPointerException
at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
at com.sun.tools.javac.jvm.Gen.visitMethodDef(Gen.java:885)
at com.sun.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:669)
at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
at com.sun.tools.javac.jvm.Gen.genClass(Gen.java:2235)
at com.sun.tools.javac.main.JavaCompiler.genCode(JavaCompiler.java:712)
at 
com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1451)
at 
com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1419)
at com.sun.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:870)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:829)
at com.sun.tools.javac.main.Main.compile(Main.java:439)

[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-10-18 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586005#comment-15586005
 ] 

Wei-Chiu Chuang commented on HADOOP-13535:
--

Thanks for the reminder. I cherry-picked it to 2.7.x.

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Fix For: 2.8.0, 2.7.4
>
> Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch, 
> HADOOP-13535.003.patch
>
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.






[jira] [Updated] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-10-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13535:
-
Fix Version/s: 2.7.4

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Fix For: 2.8.0, 2.7.4
>
> Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch, 
> HADOOP-13535.003.patch
>
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.






[jira] [Updated] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics to Graphite until restarted

2016-10-18 Thread Sean Young (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Young updated HADOOP-13730:

Fix Version/s: 2.8.0
   Status: Patch Available  (was: Open)

Keep on trying forever; Graphite might come back.
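
For reference, the retry idea as a rough sketch (illustrative names only, not the attached patch itself): on an {{IOException}}, drop the connection but leave the sink running, so the next call reconnects instead of counting towards a permanent shutdown.

{code:java}
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Illustrative sketch, not the attached patch: a Graphite writer that
// reconnects on the next call instead of disabling itself after N errors.
class RetryingGraphiteWriter {
  private final String host;
  private final int port;
  private Writer writer;   // null while disconnected

  RetryingGraphiteWriter(String host, int port) {
    this.host = host;
    this.port = port;
  }

  void send(String metricLine) {
    try {
      if (writer == null) {
        // (Re)open the connection lazily.
        Socket socket = new Socket(host, port);
        writer = new OutputStreamWriter(
            socket.getOutputStream(), StandardCharsets.UTF_8);
      }
      writer.write(metricLine);
      writer.flush();
    } catch (IOException e) {
      // Drop the dead connection but keep going: the next send()
      // attempts a fresh connection, however long Graphite is down.
      try {
        if (writer != null) {
          writer.close();
        }
      } catch (IOException ignored) {
        // best-effort close of an already-broken socket
      }
      writer = null;
    }
  }
}
{code}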

> After 5 connection failures, yarn stops sending metrics to Graphite until
> restarted
> 
>
> Key: HADOOP-13730
> URL: https://issues.apache.org/jira/browse/HADOOP-13730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: Sean Young
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 
> 0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch
>
>
> We've had issues in production where metrics stopped. We found the following 
> in the log files:
> 2016-09-02 21:44:32,493 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-09-03 00:03:04,335 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-09-03 00:20:35,436 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Connection timed out
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)

[jira] [Updated] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics to Graphite until restarted

2016-10-18 Thread Sean Young (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Young updated HADOOP-13730:

Attachment: 0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch

> After 5 connection failures, yarn stops sending metrics to Graphite until
> restarted
> 
>
> Key: HADOOP-13730
> URL: https://issues.apache.org/jira/browse/HADOOP-13730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: Sean Young
>Priority: Minor
> Attachments: 
> 0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch
>
>
> We've had issues in production where metrics stopped. We found the following 
> in the log files:
> 2016-09-02 21:44:32,493 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-09-03 00:03:04,335 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-09-03 00:20:35,436 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Connection timed out
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)

[jira] [Created] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics to Graphite until restarted

2016-10-18 Thread Sean Young (JIRA)
Sean Young created HADOOP-13730:
---

 Summary: After 5 connection failures, yarn stops sending metrics to Graphite until restarted
 Key: HADOOP-13730
 URL: https://issues.apache.org/jira/browse/HADOOP-13730
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.7.2
Reporter: Sean Young
Priority: Minor


We've had issues in production where metrics stopped. We found the following in 
the log files:

2016-09-02 21:44:32,493 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
Error sending metrics to Graphite
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
at java.io.Writer.write(Writer.java:154)
at 
org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
at 
org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at 
org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)

2016-09-03 00:03:04,335 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
Error sending metrics to Graphite
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
at java.io.Writer.write(Writer.java:154)
at 
org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
at 
org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at 
org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)

2016-09-03 00:20:35,436 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
Error sending metrics to Graphite
java.net.SocketException: Connection timed out
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
at java.io.Writer.write(Writer.java:154)
at 
org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
at 
org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at 
org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
2016-09-03 00:22:48,862 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
Error sending 

[jira] [Commented] (HADOOP-13728) S3A can support short user-friendly aliases for configuration of credential providers.

2016-10-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585835#comment-15585835
 ] 

Chris Nauroth commented on HADOOP-13728:


Yes, I was thinking hard-coded aliases, just to cover the most commonly used 
credential providers.  I think an alias configuration file, or anything like 
the file system scheme configuration flexibility, would be over-engineering.  
We aren't seeing custom credential providers as a typical use case so far, so I 
think it's fine to keep using full class names in the rare case that someone 
needs that.
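
Sketching the hard-coded approach for concreteness (the alias strings below are hypothetical examples, not a committed spec):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Sketch of a hard-coded alias table for fs.s3a.aws.credentials.provider.
// The alias strings are hypothetical examples, not a committed spec.
final class CredentialProviderAliases {
  private static final Map<String, String> ALIASES = new HashMap<>();
  static {
    ALIASES.put("Simple",
        "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
    ALIASES.put("Temporary",
        "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider");
    ALIASES.put("InstanceProfile",
        "com.amazonaws.auth.InstanceProfileCredentialsProvider");
  }

  private CredentialProviderAliases() {
  }

  /** Expand a short alias; anything unknown is treated as a class name. */
  static String resolve(String configured) {
    return ALIASES.getOrDefault(configured, configured);
  }
}
{code}

With something like that, a short value such as {{Temporary}} would expand to the full class name, while unrecognized values fall through to today's load-by-class-name path.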

> S3A can support short user-friendly aliases for configuration of credential 
> providers.
> --
>
> Key: HADOOP-13728
> URL: https://issues.apache.org/jira/browse/HADOOP-13728
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> This issue proposes to support configuration of the S3A credential provider 
> chain using short aliases to refer to the common credential providers in 
> addition to allowing full class names.  Supporting short aliases would 
> provide a simpler operations experience for the most common cases.






[jira] [Commented] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585797#comment-15585797
 ] 

Chris Nauroth commented on HADOOP-13703:


At this point, I am +1 for the branch-2 patch.  The remaining pre-commit 
warnings are acceptable.  The whitespace warnings are either unrelated or 
easily fixed with {{git apply --whitespace=fix}}.  The javac warning is due to 
a deprecation that isn't part of this patch, so I'm not sure why it was flagged.

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, 
> HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch, 
> HADOOP-13560-branch-2-016.patch
>
>
> The HADOOP-13560 patches and PR has got yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.






[jira] [Updated] (HADOOP-13660) Upgrade commons-configuration version

2016-10-18 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13660:
---
Attachment: HADOOP-13660-configuration2.001.patch

Just as an update, attaching what I've been working on. This is after working 
through the migration guide and linked documentation. It compiles and should 
be reasonably close to how it was before. Currently TestMetricsConfig is 
failing, and the root problem appears to be a difference in how config files 
are located: some files that used to be found automatically now can't be 
found. Strategies for locating files are now configurable, but even when 
configured to try every strategy, it still fails to find files that previous 
tests seemed to find just fine. Will continue looking at it...
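
For anyone following along, the location-strategy wiring in configuration2 looks roughly like this. This is a sketch assuming a plain properties file is being loaded, not the attached patch:

{code:java}
import java.util.Arrays;

import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.ex.ConfigurationException;
import org.apache.commons.configuration2.io.ClasspathLocationStrategy;
import org.apache.commons.configuration2.io.CombinedLocationStrategy;
import org.apache.commons.configuration2.io.FileLocationStrategy;
import org.apache.commons.configuration2.io.FileSystemLocationStrategy;
import org.apache.commons.configuration2.io.ProvidedURLLocationStrategy;

public class MetricsConfigLoadSketch {
  public static PropertiesConfiguration load(String fileName)
      throws ConfigurationException {
    // Try several lookup strategies in order, approximating the 1.x
    // behaviour of searching the filesystem and then the classpath.
    FileLocationStrategy strategy = new CombinedLocationStrategy(
        Arrays.<FileLocationStrategy>asList(
            new ProvidedURLLocationStrategy(),
            new FileSystemLocationStrategy(),
            new ClasspathLocationStrategy()));

    FileBasedConfigurationBuilder<PropertiesConfiguration> builder =
        new FileBasedConfigurationBuilder<>(PropertiesConfiguration.class);
    builder.configure(new Parameters().properties()
        .setFileName(fileName)
        .setLocationStrategy(strategy));
    return builder.getConfiguration();
  }
}
{code}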

> Upgrade commons-configuration version
> -
>
> Key: HADOOP-13660
> URL: https://issues.apache.org/jira/browse/HADOOP-13660
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13660-configuration2.001.patch, 
> HADOOP-13660.001.patch
>
>
> We're currently pulling in version 1.6 - I think we should upgrade to the 
> latest 1.10.






[jira] [Updated] (HADOOP-13660) Upgrade commons-configuration version

2016-10-18 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13660:
---
Status: Open  (was: Patch Available)

> Upgrade commons-configuration version
> -
>
> Key: HADOOP-13660
> URL: https://issues.apache.org/jira/browse/HADOOP-13660
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13660.001.patch
>
>
> We're currently pulling in version 1.6 - I think we should upgrade to the 
> latest 1.10.






[jira] [Created] (HADOOP-13729) switch to Configuration.getLongBytes for byte options

2016-10-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13729:
---

 Summary: switch to Configuration.getLongBytes for byte options
 Key: HADOOP-13729
 URL: https://issues.apache.org/jira/browse/HADOOP-13729
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


It's fiddly working out how many bytes to use for 128MB of readahead, and 
equally hard to work out what a configured value actually means.

If we switch to {{Configuration.getLongBytes()}} for reading in the readahead, 
partition and threshold values, all existing configs will keep working, but 
new configs can use K, M, G suffixes.

Easy to code; should add a new test/adapt existing ones.
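
For illustration, here is what it buys us (the keys are existing S3A options; the values are made-up examples):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class GetLongBytesDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);

    // Existing style keeps working: a raw byte count.
    conf.set("fs.s3a.readahead.range", "65536");
    System.out.println(conf.getLongBytes("fs.s3a.readahead.range", 0));
    // -> 65536

    // New style: binary suffixes are expanded by getLongBytes().
    conf.set("fs.s3a.multipart.size", "128M");
    System.out.println(conf.getLongBytes("fs.s3a.multipart.size", 0));
    // -> 134217728
  }
}
{code}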






[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585029#comment-15585029
 ] 

Kai Zheng commented on HADOOP-11798:


The native code looks good overall. One minor point: please clean up the 
unnecessary {{#include}} statements in the *.c files.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch
>
>
> Raw XOR coder is utilized in Reed-Solomon erasure coder in an optimization to 
> recover only one erased block which is in most often case. It can also be 
> used in HitchHiker coder. Therefore a native implementation of it would be 
> deserved for performance gain.






[jira] [Commented] (HADOOP-13728) S3A can support short user-friendly aliases for configuration of credential providers.

2016-10-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584945#comment-15584945
 ] 

Steve Loughran commented on HADOOP-13728:
-

These would be hard-coded, right? Not some alias configuration file?

> S3A can support short user-friendly aliases for configuration of credential 
> providers.
> --
>
> Key: HADOOP-13728
> URL: https://issues.apache.org/jira/browse/HADOOP-13728
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> This issue proposes to support configuration of the S3A credential provider 
> chain using short aliases to refer to the common credential providers in 
> addition to allowing full class names.  Supporting short aliases would 
> provide a simpler operations experience for the most common cases.






[jira] [Commented] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584941#comment-15584941
 ] 

Steve Loughran commented on HADOOP-13727:
-

LGTM

Could you have that 1) 2) 3) list formatted appropriately for both the docs 
and the md? That is, an HTML list for the javadocs and 1. 2. for markdown?

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.
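
The enforcement could be as simple as the following sketch; the wrapper class name and structure are illustrative, not necessarily what the attached patch does:

{code:java}
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

// Sketch: one JVM-wide delegate, so every S3A client and thread shares a
// single fetcher against the Instance Metadata Service instead of each
// creating its own.
final class SharedInstanceProfileCredentialsProvider
    implements AWSCredentialsProvider {

  private static final InstanceProfileCredentialsProvider DELEGATE =
      new InstanceProfileCredentialsProvider();

  @Override
  public AWSCredentials getCredentials() {
    return DELEGATE.getCredentials();
  }

  @Override
  public void refresh() {
    DELEGATE.refresh();
  }
}
{code}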






[jira] [Commented] (HADOOP-13726) Enforce that FileSystem initializes only a single instance of the requested FileSystem.

2016-10-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584932#comment-15584932
 ] 

Steve Loughran commented on HADOOP-13726:
-

Guava has some caching stuff too; it may be simpler to reuse that than to 
repurpose some of our own code.

> Enforce that FileSystem initializes only a single instance of the requested 
> FileSystem.
> ---
>
> Key: HADOOP-13726
> URL: https://issues.apache.org/jira/browse/HADOOP-13726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> The {{FileSystem}} cache is intended to guarantee reuse of instances by 
> multiple call sites or multiple threads.  The current implementation does 
> provide this guarantee, but there is a brief race condition window during 
> which multiple threads could perform redundant initialization.  If the file 
> system implementation has expensive initialization logic, then this is 
> wasteful.  This issue proposes to eliminate that race condition and guarantee 
> initialization of only a single instance.
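
To illustrate the Guava suggestion above: a {{LoadingCache}} runs its loader at most once per key, which is exactly the guarantee this issue asks for. A toy sketch, with a String standing in for the FileSystem instance:

{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.concurrent.ExecutionException;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Toy sketch: Guava's LoadingCache runs load() at most once per key, so
// concurrent callers asking for the same URI block on one initialization
// instead of racing to create redundant instances.
final class SingleInitCacheSketch {
  private final LoadingCache<URI, String> cache = CacheBuilder.newBuilder()
      .build(new CacheLoader<URI, String>() {
        @Override
        public String load(URI uri) {
          // Stand-in for the expensive FileSystem initialization.
          return "fs-instance-for-" + uri;
        }
      });

  String get(URI uri) throws IOException {
    try {
      return cache.get(uri);
    } catch (ExecutionException e) {
      throw new IOException(e.getCause());
    }
  }
}
{code}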






[jira] [Commented] (HADOOP-13522) Add %A and %a formats for fs -stat command to print permissions

2016-10-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584721#comment-15584721
 ] 

Hudson commented on HADOOP-13522:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10629 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10629/])
HADOOP-13522. Add %A and %a formats for fs -stat command to print (aajisaka: 
rev bedfec0c10144087168bc79501ffd5ab4fa52606)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java


> Add %A and %a formats for fs -stat command to print permissions
> ---
>
> Key: HADOOP-13522
> URL: https://issues.apache.org/jira/browse/HADOOP-13522
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Alex Garbarini
>Assignee: Alex Garbarini
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13522.001.patch, HADOOP-13522.002.patch, 
> HADOOP-13522.003.patch, HADOOP-13522.004.patch, HADOOP-13522.005.patch, 
> HADOOP-13522.006.patch, HADOOP-13522.007.patch, HADOOP-13522.008.patch
>
>
> This patch adds to fs/shell/Stat.java the missing options of %a and %A. 
> FileStatus already contains the getPermission() method required for returning 
> symbolic permissions. FsPermission contains the method to return the binary 
> short, but nothing to present in standard Octal format. 
> Most UNIX admins base their work on such standard octal permissions. Hence, 
> this patch also introduces one tiny method to translate the toShort() return 
> into octal.
> Build has already passed unit tests and javadoc.
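
The octal translation at the heart of this is tiny; roughly the following, written against the public {{FsPermission}} API rather than quoting the committed code:

{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public class OctalPermissionDemo {
  public static void main(String[] args) {
    FsPermission perm = FsPermission.valueOf("-rwxr-xr--");

    // %a: numeric permissions, masked down to the classic three octal digits.
    System.out.println(Integer.toOctalString(perm.toShort() & 0777)); // 754

    // %A: symbolic permissions, already provided by FsPermission.
    System.out.println(perm); // rwxr-xr--
  }
}
{code}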






[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584723#comment-15584723
 ] 

Hudson commented on HADOOP-13061:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10629 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10629/])
HADOOP-13061. Refactor erasure coders. Contributed by Kai Sasaki (kai.zheng: 
rev c023c748869063fb67d14ea996569c42578d1cea)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/DummyErasureCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractHHErasureCodingStep.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/DummyErasureDecoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/HHErasureCodingStep.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/package-info.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/codec/TestHHXORErasureCodec.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/HHXORErasureCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/HHXORErasureEncodingStep.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/HHXORErasureDecoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCodecRawCoderMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFile.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCodingStep.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestRSErasureCoder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/HHXORErasureDecodingStep.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodecOptions.java

[jira] [Updated] (HADOOP-13061) Refactor erasure coders

2016-10-18 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13061:
---
Attachment: HADOOP-13061.19.patch

Uploaded the committed revision (with a minor cleanup) in case it's needed.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch, HADOOP-13061.17.patch, 
> HADOOP-13061.18.patch, HADOOP-13061.19.patch
>
>







[jira] [Updated] (HADOOP-13061) Refactor erasure coders

2016-10-18 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13061:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

The latest patch LGTM; +1. Committed to trunk.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch, HADOOP-13061.17.patch, 
> HADOOP-13061.18.patch
>
>







[jira] [Updated] (HADOOP-13522) Add %A and %a formats for fs -stat command to print permissions

2016-10-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13522:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~algarbar] for the contribution!

> Add %A and %a formats for fs -stat command to print permissions
> ---
>
> Key: HADOOP-13522
> URL: https://issues.apache.org/jira/browse/HADOOP-13522
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Alex Garbarini
>Assignee: Alex Garbarini
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13522.001.patch, HADOOP-13522.002.patch, 
> HADOOP-13522.003.patch, HADOOP-13522.004.patch, HADOOP-13522.005.patch, 
> HADOOP-13522.006.patch, HADOOP-13522.007.patch, HADOOP-13522.008.patch
>
>
> This patch adds to fs/shell/Stat.java the missing options of %a and %A. 
> FileStatus already contains the getPermission() method required for returning 
> symbolic permissions. FsPermission contains the method to return the binary 
> short, but nothing to present in standard Octal format. 
> Most UNIX admins base their work on such standard octal permissions. Hence, 
> this patch also introduces one tiny method to translate the toShort() return 
> into octal.
> Build has already passed unit tests and javadoc.





