[jira] [Updated] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-07 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13696:

Status: Patch Available  (was: Open)

> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
> downstream. Marking it as "provided" would mean that it would only be needed 
> by those programs which wanted the SFTP filesystem, and, if they wanted to 
> use a different jsch version, there'd be no maven problems
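
For reference, the change amounts to something like the following in hadoop-common's 
pom.xml (a minimal sketch, not necessarily the attached patch; the jsch version stays 
managed in hadoop-project):

{code}
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <scope>provided</scope>
</dependency>
{code}

Downstream projects that do want the SFTP filesystem would then declare their own 
jsch dependency, at whatever version they prefer.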






[jira] [Commented] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-07 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15557302#comment-15557302
 ] 

Yuanbo Liu commented on HADOOP-13696:
-

uploaded v1 patch

> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
> downstream. Marking it as "provided" would mean that it would only be needed 
> by those programs which wanted the SFTP filesystem, and, if they wanted to 
> use a different jsch version, there'd be no maven problems






[jira] [Updated] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-07 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13696:

Attachment: HADOOP-13696.001.patch

> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
> downstream. Marking it as "provided" would mean that it would only be needed 
> by those programs which wanted the SFTP filesystem, and, if they wanted to 
> use a different jsch version, there'd be no maven problems






[jira] [Assigned] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-07 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-13696:
---

Assignee: Yuanbo Liu

> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
> downstream. Marking it as "provided" would mean that it would only be needed 
> by those programs which wanted the SFTP filesystem, and, if they wanted to 
> use a different jsch version, there'd be no maven problems






[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the Hadoop-compatible file systems shipped with Hadoop.

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15557075#comment-15557075
 ] 

Hadoop QA commented on HADOOP-13687:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 60 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
12s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-cloud-storage-project/hadoop-cloud-storage 
hadoop-cloud-storage-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13687 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1283/HADOOP-13687-trunk.003.patch
 |
| Optional Tests |  asflicense  findbugs  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  checkstyle  |
| uname | Linux 7340e580deb5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool 

[jira] [Commented] (HADOOP-13691) remove build user and date from various hadoop UI

2016-10-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556924#comment-15556924
 ] 

Allen Wittenauer commented on HADOOP-13691:
---

bq. What I'm curious about is whether this information is verifiable. 

For non-ASF builds, the answer doesn't really matter.  It's helpful to the 
community to have this information as part of the build.  Removing it would be 
very anti-community.  For ASF builds, we can do some things to make it 
verifiable if that's actually a concern. (e.g., require that the PGP key being 
used to sign matches use...@apache.org)

Just to be clear: I don't care about the web UIs.  No one I know actually uses 
them for admin work anyway (too slow, too cluttered, lack key information, not 
programmable, APIs aren't stable, etc, etc), especially after 2.4 nuked the 
hell out of HDFS.  But yanking it from 'hadoop version' is a terrible idea.  
It's key information.

> remove build user and date from various hadoop UI
> -
>
> Key: HADOOP-13691
> URL: https://issues.apache.org/jira/browse/HADOOP-13691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Priority: Minor
>
> Currently in the namenode UI as well as the resource manager UI, we display 
> the date of the build as well as the user id of the person who built it. 
> Although other bits of information are useful (e.g. git commit id, branch, 
> etc.), the value of the build date and user is suspect. We should consider 
> removing them from the visible UI.






[jira] [Commented] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556901#comment-15556901
 ] 

Allen Wittenauer commented on HADOOP-13700:
---

bq.  public API, specifically TrashPolicy.getInstance() has been changed in an 
incompatible way. 

The API compatibility guidelines allow for the API to change in a major release.

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance(), has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0
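
As a sketch of the incompatibility described above (parameter names assumed):

{code}
// 2.x (Public & Evolving):
public static TrashPolicy getInstance(Configuration conf, FileSystem fs, Path home)

// 3.0: the Path parameter is gone and an IOException can now be thrown, so
// existing callers both fail to compile and need new exception handling.
public static TrashPolicy getInstance(Configuration conf, FileSystem fs)
    throws IOException
{code}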






[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556887#comment-15556887
 ] 

Kai Zheng commented on HADOOP-12082:


Hi [~hgadre],

Sorry for the inconvenience and the late reply (I was on PRC holiday). 

bq. Specifically I need to add unit tests to verify the LDAP authentication 
functionality. 
Do these tests relate to Kerberos or not? Or basically they need an LDAP 
backend, instead of a KDC, right?

bq. Can we use the LdapBackend provided by Apache Kerby for this usecase? Or 
should I initialize the DirectoryService API for my unit tests?
It depends on what these tests actually need. If they just need an LDAP server, 
I think you have some options, like the DirectoryService API. The Kerby 
LdapBackend is only for the Kerby KDC situation, so if you don't need a KDC, 
it's not a good fit.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support LDAP based authentication scheme via Hadoop 
> AuthenticationFilter. HADOOP-9054 added a support to plug-in custom 
> authentication scheme (in addition to Kerberos) via 
> AltKerberosAuthenticationHandler class. But it is based on selecting the 
> authentication mechanism based on User-Agent HTTP header which does not 
> conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
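
To illustrate the proposed exchange (realm name hypothetical): the server would 
return both challenges on a 401 response and let the client choose, e.g.

{noformat}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate
WWW-Authenticate: Basic realm="ldap-login"
{noformat}

A Kerberos-capable client then replies with "Authorization: Negotiate <token>", 
while an LDAP client replies with "Authorization: Basic <base64 credentials>".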




[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11798:
---
Assignee: SammiChen  (was: Kai Zheng)

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch
>
>
> The raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.






[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-10-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556836#comment-15556836
 ] 

Kai Zheng commented on HADOOP-12090:


I'm wondering if this could be reproduced upon the trunk, with the updated 
MiniKDC.

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The errors that cause this failure on the KDC server on the minikdc are a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at 
> org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at 
> org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at 
> org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}






[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556824#comment-15556824
 ] 

Kai Zheng commented on HADOOP-11798:


Hi [~andrew.wang],

Good catch on this. It's good to make the XOR codec work, and we'll check to 
see if any gaps exist. I will also resume the work here and update the patch. 
Thanks!

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch
>
>
> The raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.






[jira] [Commented] (HADOOP-12733) Remove references to obsolete io.seqfile configuration variables

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556819#comment-15556819
 ] 

Hadoop QA commented on HADOOP-12733:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
48s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832218/HADOOP-12733.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 450df2bdefd0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e57fa81 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10706/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-tools/hadoop-sls U: . |
| 

[jira] [Commented] (HADOOP-13699) Configuration does not substitute multiple references to the same var

2016-10-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556739#comment-15556739
 ] 

Xiao Chen commented on HADOOP-13699:


Thanks for finding and fixing this, Andrew. I'm +1 on the return-fast change; 
looking at HADOOP-6871, it fixes a bug that wasn't considered in the 
original implementation. Ideally we should let people from either jira comment, 
in case they feel differently.

And man, HADOOP-6871 is 4+ years ago. I hope no one has been waiting for that 
for the past half-decade. :) Mind linking the jira (I didn't find a 
{{replaces}} option, maybe {{supercedes}}?) and commenting there, so this 
change is more advertised?

> Configuration does not substitute multiple references to the same var
> -
>
> Key: HADOOP-13699
> URL: https://issues.apache.org/jira/browse/HADOOP-13699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13699.001.patch
>
>
> Config var loop detection was originally introduced by HADOOP-6871. Due to 
> cycle detection changes in the trunk patch for HADOOP-11506, multiple 
> references to the same variable are no longer resolved, e.g.
> {noformat}
> somekey = "${otherkey} ${otherkey}"
> {noformat}
> This loop detection business is fragile, expensive, and not in branch-2, so 
> let's reduce it.






[jira] [Created] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-07 Thread Haibo Chen (JIRA)
Haibo Chen created HADOOP-13700:
---

 Summary: Incompatible changes in TrashPolicy 
 Key: HADOOP-13700
 URL: https://issues.apache.org/jira/browse/HADOOP-13700
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0-alpha1
Reporter: Haibo Chen
Assignee: Andrew Wang
Priority: Critical
 Fix For: 3.0.0-alpha2


TrashPolicy is marked as public & evolving, but its public API, specifically 
TrashPolicy.getInstance(), has been changed in an incompatible way. 
1) The path parameter is removed in 3.0
2) A new IOException is thrown in 3.0






[jira] [Updated] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the Hadoop-compatible file systems shipped with Hadoop.

2016-10-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13687:
---
Attachment: HADOOP-13687-trunk.003.patch
HADOOP-13687-branch-2.003.patch

I'm attaching revision 003 patches for trunk and branch-2, showing the 
structure Steve suggested in his last comment.

{code}
hadoop-cloud-storage-project
|- hadoop-azure-datalake
`- hadoop-cloud-storage
{code}

The trunk patch now looks huge because of {{git mv 
hadoop-tools/hadoop-azure-datalake hadoop-cloud-storage-project}}.  The 
branch-2 patch is still small, because {{hadoop-azure-datalake}} doesn't exist 
there.

One thing that wasn't clear to me is whether people are suggesting a change to 
just the source layout or also the distro layout.  Would we move the jars out of 
share/hadoop/tools and into a new share/hadoop/cloud-storage directory?  It 
would be a backward-incompatible change, and I don't think it would add much 
value, so I haven't made that change in this revision.  If anyone wants to 
lobby hard for a change in the distro layout, then we'll need additional 
changes to introduce a {{hadoop-cloud-storage-dist}} module, with 
{{hadoop-project-dist}} as its parent, the {{hadoop.component}} property set to 
{{cloud-storage}}, and a new {{cloud-storage.xml}} descriptor file under 
{{hadoop-assemblies}}.

bq. I think you could be more aggressive about the dependencies of the 
openstack stuff; I suspect there is stuff there which could/should be tagged as 
scope=provided, so tuning down the transitiveness more.

I haven't gone any further yet with this.  Right now, the only additional 
dependency that clients of {{hadoop-cloud-storage}} sweep in transitively is 
commons-httpclient 3.1, which is required until we break that dependency 
(tracked elsewhere in another JIRA).  I really wanted to get rid of that 
test-jar dependency though.

bq. Allen Wittenauer there's no chance of Yetus doing a mvn dependencies > 
target/dependencies.txt operation on any patch which does poms? Or perhaps we 
add the policy: all patches which update dependencies must attached the changed 
dependency graph

I think this could potentially become a feature request for Yetus pre-commit to 
run {{mvn dependency:list}} before and after the patch and diff the results.  
If anything changes, it could render a -0 in the report (not blocking the 
patch, but flagging that the dependency changes are worth further review).
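
A rough sketch of what such a check could do (standard Maven and Unix commands; 
the Yetus wiring itself is hypothetical):

{code}
rm -f /tmp/deps-before.txt /tmp/deps-after.txt
mvn -q dependency:list -DoutputFile=/tmp/deps-before.txt -DappendOutput=true
# ... apply the patch and rebuild ...
mvn -q dependency:list -DoutputFile=/tmp/deps-after.txt -DappendOutput=true
diff /tmp/deps-before.txt /tmp/deps-after.txt \
  || echo "dependency changes detected; worth a closer review (-0)"
{code}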

> Provide a unified dependency artifact that transitively includes the 
> Hadoop-compatible file systems shipped with Hadoop.
> 
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.






[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556665#comment-15556665
 ] 

Mingliang Liu commented on HADOOP-13697:


There are existing tests for ToolRunner and LogLevel; no new tests are needed.

Test failure is not related, and is tracked by [HDFS-10985].

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch, HADOOP-13697.001.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:httpPort> <classname> [-protocol (http|https)]
>   [-setlevel <host:httpPort> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method and log an error 
> message instead of throwing the stack trace, which may frustrate users.
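
One possible shape of that fix (a sketch, not necessarily the attached patch):

{code}
public static void main(String[] args) throws Exception {
  CLI cli = new CLI(new Configuration());
  try {
    System.exit(cli.run(args));
  } catch (HadoopIllegalArgumentException e) {
    // Usage was already printed; show just the message, not the stack trace.
    System.err.println(e.getLocalizedMessage());
    System.exit(-1);
  }
}
{code}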






[jira] [Commented] (HADOOP-13691) remove build user and date from various hadoop UI

2016-10-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556659#comment-15556659
 ] 

Sangjin Lee commented on HADOOP-13691:
--

bq. It's very useful to be able use to 'hadoop version' to determine the date 
and user who built it when dealing with custom builds. From an ASF perspective, 
as soon as we start doing releases correctly again, it'll be a quick way to 
determine who the RE was for a given release.

I understand user and date do give us more information. What I'm curious about 
is whether this information is verifiable. For example, git commit id's are 
pretty concrete and it would be easy to verify that the binary matches the said 
git commit id. However, I don't think user and date give you really verifiable 
information. The user is a local system user id which may have no verifiable 
relation to the actual release manager, right?

> remove build user and date from various hadoop UI
> -
>
> Key: HADOOP-13691
> URL: https://issues.apache.org/jira/browse/HADOOP-13691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Priority: Minor
>
> Currently in the namenode UI as well as the resource manager UI, we display 
> the date of the build as well as the user id of the person who built it. 
> Although other bits of information are useful (e.g. git commit id, branch, 
> etc.), the value of the build date and user is suspect. We should consider 
> removing them from the visible UI.






[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-10-07 Thread Thomas Poepping (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556662#comment-15556662
 ] 

Thomas Poepping commented on HADOOP-13344:
--

Hey Allen, re #1:

I wrote that incorrectly. The intention was 
"${HADOOP_USE_BUILTIN_SLF4J_BINDING:-true}", which sets the variable to 
"true" if it is currently unset [1].

I'll look into resolving your comments for #2.

[1] 
http://stackoverflow.com/questions/27445455/what-does-the-colon-dash-mean-in-bash
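
A minimal sketch of that idiom (note that ":-" also substitutes when the 
variable is set but empty; plain "-" triggers only when it is strictly unset):

{code}
# Default to "true" when HADOOP_USE_BUILTIN_SLF4J_BINDING is unset (or empty).
HADOOP_USE_BUILTIN_SLF4J_BINDING="${HADOOP_USE_BUILTIN_SLF4J_BINDING:-true}"
{code}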

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.






[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556658#comment-15556658
 ] 

Daniel Templeton commented on HADOOP-10075:
---

Damn, that's a patch.  Thanks for pushing that boulder up the mountain.  Let's 
get it reviewed before it rolls back down again.

Taking a first pass at reviewing.  Here are some initial comments:

* {{RequestLoggerFilter}} is an example, so it would be nice to add some 
javadoc with a {{@deprecated}} tag to explain why and what should be used 
instead.  I think that's generally true for all the classes you deprecated.
* It would be good to log something here: {code}// Jetty doesn't like the 
same path spec mapping to different servlets, so
// if there's already a mapping for this pathSpec, remove it and assume that
// the newest one is the one we want
final ServletMapping[] servletMappings =
webAppContext.getServletHandler().getServletMappings();
for (int i = 0; i < servletMappings.length; i++) {
  if (servletMappings[i].containsPathSpec(pathSpec)) {
ServletMapping[] newServletMappings =
ArrayUtil.removeFromArray(servletMappings, servletMappings[i]);
webAppContext.getServletHandler()
.setServletMappings(newServletMappings);
break;
  }
}{code}
* Can you get away with a diamond operator here?{code}for 
(Map.Entry<ServletContextHandler, Boolean> e
: defaultContexts.entrySet()) {
  if (e.getValue()) {
    ...
  }
}{code}
* You have a few casts, like this: {code}ServerConnector c = 
(ServerConnector) webServer.getConnectors()[index];{code}  I don't think you're 
supposed to have the space between the parens around the type and the target.
* I'm not a fan of ternary operators, but this patch is big enough that I'm 
willing to let it slide this time. :)
* Is it safe to remove classes without letting them sit around deprecated for a 
release?
* You took out the space before the semicolon everywhere else, so you shouldn't 
add it here: {code}  @Produces({MediaType.APPLICATION_JSON + "; 
charset=utf-8"}){code}
* Please add space around the operators here:{code}
  http_config.setRequestHeaderSize(1024*64);
  http_config.setResponseHeaderSize(1024*64);{code}
* What's the story here? {code}-  @Test(timeout = 5000)
+  @Test(timeout = 1){code} Extending timeouts isn't usually the right 
answer...
* I think the spacing could be better here:{code}Log.getLog().warn(
"Job end notification couldn't parse configured proxy's port "
    + portConf + ". Not going to use a proxy");{code}
* Might it be worthwhile defining {{";charset=utf-8"}} as a constant?
* Since you're messing with {code}Log.getLog().info(" == alloc " + 
allocatedContainerCount
    + " it left " + iterationsLeft){code} would you mind making that 
log message actually make sense?
* At great risk to my personal safety, I will suggest that it would be nice to 
add messages to the asserts you're touching in the test classes, e.g.{code}
assertEquals(MediaType.APPLICATION_XML_TYPE + "; charset=utf-8",
response.getType().toString());{code}
* You should probably be more specific here:{code}  // Once 
${hbase-compatible-hadoop.version} is changed to Hadoop 3,
  // we should be able to get rid of this.{code}  Get rid of what and 
how?
* {{TimelineReaderServer.setupOptions()}} should have a javadoc header.



> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Updated] (HADOOP-12733) Remove references to obsolete io.seqfile configuration variables

2016-10-07 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12733:

Attachment: HADOOP-12733.002.patch

Rebased against trunk

> Remove references to obsolete io.seqfile configuration variables
> 
>
> Key: HADOOP-12733
> URL: https://issues.apache.org/jira/browse/HADOOP-12733
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12733.001.patch, HADOOP-12733.002.patch
>
>
> The following variables appear to no longer be used.
>   io.seqfile.lazydecompress
>   io.seqfile.sorter.recordlimit






[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556600#comment-15556600
 ] 

Hadoop QA commented on HADOOP-13697:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13697 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832209/HADOOP-13697.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux beec0a043165 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e57fa81 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10705/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10705/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10705/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>   

[jira] [Updated] (HADOOP-13699) Configuration does not substitute multiple references to the same var

2016-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13699:
-
Attachment: HADOOP-13699.001.patch

Patch attached. This only attempts to catch the simple case of a variable 
resolving to itself. Support for detecting multiple variable loops is removed.

Solving this generally is like detecting a loop in a linked list. Since this is 
performance sensitive code, I think we'd rather not do that.
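
A minimal regression check along these lines (key names taken from the 
description; the assertion is assumed to be JUnit's assertEquals):

{code}
Configuration conf = new Configuration(false);
conf.set("otherkey", "other");
conf.set("somekey", "${otherkey} ${otherkey}");
// Both references to the same variable should be substituted on get().
assertEquals("other other", conf.get("somekey"));
{code}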

> Configuration does not substitute multiple references to the same var
> -
>
> Key: HADOOP-13699
> URL: https://issues.apache.org/jira/browse/HADOOP-13699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13699.001.patch
>
>
> Config var loop detection was originally introduced by HADOOP-6871. Due to 
> cycle detection changes in the trunk patch for HADOOP-11506, multiple 
> references to the same variable are no longer resolved, e.g.
> {noformat}
> somekey = "${otherkey} ${otherkey}"
> {noformat}
> This loop detection business is fragile, expensive, and not in branch-2, so 
> let's reduce it.






[jira] [Created] (HADOOP-13699) Configuration does not substitute multiple references to the same var

2016-10-07 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-13699:


 Summary: Configuration does not substitute multiple references to 
the same var
 Key: HADOOP-13699
 URL: https://issues.apache.org/jira/browse/HADOOP-13699
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Critical


Config var loop detection was originally introduced by HADOOP-6871. Due to 
cycle detection changes in the trunk patch for HADOOP-11506, multiple 
references to the same variable are no longer resolved, e.g.

{noformat}
somekey = "${otherkey} ${otherkey}"
{noformat}

This loop detection business is fragile, expensive, and not in branch-2, so 
let's reduce it.






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556446#comment-15556446
 ] 

Mingliang Liu commented on HADOOP-13446:


+1 for backporting to {{branch-2.8}}. Thanks.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556436#comment-15556436
 ] 

Xiao Chen commented on HADOOP-13669:


Thanks [~sacharya] for revving. Looks good to me. We're just logging a debug 
log on any exception before throwing. This would make debugging problems a lot 
easier while not have much impact on normal usages.

Findbugs warnings are false alarm, since the exception is thrown.
+1 pending the below to nits (I'm the human checkstyle wizard):
- Line 198: s/{{} catch (Exception e){}}/{{} catch (Exception e) {}}/g
- Line 282: s/{{}catch (Exception e) {}}/{{} catch (Exception e) {}}/g
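
For context, a minimal sketch of the log-before-throw pattern under review, 
assuming an SLF4J-style logger (the names are illustrative, not the actual 
KMS.java code):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogBeforeThrow {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogBeforeThrow.class);

  static String createKey(String name) throws Exception {
    try {
      return doCreate(name);
    } catch (Exception e) {
      // Log the full stack trace server-side at DEBUG, then rethrow so the
      // client-facing behavior is unchanged.
      LOG.debug("Exception in createKey.", e);
      throw e;
    }
  }

  private static String doCreate(String name) throws Exception {
    throw new Exception("simulated failure for " + name);
  }
}
{code}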

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Attachments: HADOOP-13369.patch, HADOOP-13369.patch.1
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stacktrace. Logging the stacktrace 
> would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13447) Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-10-07 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556430#comment-15556430
 ] 

Aaron Fabbri commented on HADOOP-13447:
---

+1 (non-binding)... IMO branch-2.8 should follow most of s3a development.

> Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> -
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch, HADOOP-13447.003.patch, 
> HADOOP-13447.004.patch, HADOOP-13447.005.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-10-07 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556427#comment-15556427
 ] 

Aaron Fabbri commented on HADOOP-13446:
---

+1 (non-binding)... This seems like a maintenance and testing win.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13697:
---
Attachment: HADOOP-13697.001.patch

{code}
$ hadoop daemonlog
Usage: Command options are:
[-getlevel <host:port> <classname> [-protocol (http|https)]
[-setlevel <host:port> <classname> <level> [-protocol (http|https)]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D 

[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556413#comment-15556413
 ] 

Mingliang Liu commented on HADOOP-13697:


Thanks for your confirmation. Please kindly review the v1 patch.

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method, and log an error 
> message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556394#comment-15556394
 ] 

Hadoop QA commented on HADOOP-13697:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13697 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832198/HADOOP-13697.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09e4712ed5ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e853be |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10704/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10704/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop 

[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-10-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556379#comment-15556379
 ] 

Chris Nauroth commented on HADOOP-13446:


I'd like to propose merging this change back to branch-2.8 to simplify the 
process of back-porting patches and avoid merge conflicts.  I'll wait a few 
days for comments in case anyone objects.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13447) Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-10-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556380#comment-15556380
 ] 

Chris Nauroth commented on HADOOP-13447:


I'd like to propose merging this change back to branch-2.8 to simplify the 
process of back-porting patches and avoid merge conflicts.  I'll wait a few 
days for comments in case anyone objects.

> Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> -
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch, HADOOP-13447.003.patch, 
> HADOOP-13447.004.patch, HADOOP-13447.005.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556316#comment-15556316
 ] 

Wei-Chiu Chuang commented on HADOOP-13697:
--

Thanks for the correction :)

I think it's beneficial to use ToolRunner in case the user wants to run this 
command against a different cluster. I thought I had used ToolRunner, but 
apparently not.

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method, and log an error 
> message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13697:
---
Status: Patch Available  (was: Open)

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method, and log an error 
> message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13697:
---
Attachment: HADOOP-13697.000.patch

Thanks [~jojochuang] for your prompt comment.

{quote}
The exception is already caught in main() and rethrown.
{quote}
I guess you're talking about {{Cli#run}} instead of {{main()}}? Please see the 
v0 patch.

Another point: should we use ToolRunner to call {{Cli#run()}} instead of 
calling it directly? That way, ToolRunner can provide features like setting 
{{CallerContext}} and accepting 
[Generic_Options|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options]
 (do we plan to support that?) in LogLevel.
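
A hedged sketch of what the ToolRunner-based wiring could look like (the class 
below stands in for {{LogLevel.CLI}}; the details are assumptions, not the 
patch itself):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class DaemonLogCli extends Configured implements Tool {
  @Override
  public int run(String[] args) {
    if (args.length == 0) {
      System.err.println("No arguments specified");  // usage, no stack trace
      return -1;
    }
    // ... dispatch -getlevel / -setlevel here ...
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner strips generic options (-conf, -D, -fs, ...) before run()
    // sees them, and the return value becomes the process exit code.
    System.exit(ToolRunner.run(new Configuration(), new DaemonLogCli(), args));
  }
}
{code}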

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method, and log an error 
> message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556239#comment-15556239
 ] 

Hadoop QA commented on HADOOP-13669:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-common-project/hadoop-kms: The patch 
generated 0 new + 1 unchanged - 5 fixed = 1 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-common-project/hadoop-kms generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-kms |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At KMS.java:is not 
thrown in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At 
KMS.java:[line 169] |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int)  At KMS.java:is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int)  At KMS.java:[line 501] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832190/HADOOP-13369.patch.1 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b704f8164ac0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556235#comment-15556235
 ] 

Hudson commented on HADOOP-13627:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10569 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10569/])
HADOOP-13627. Have an explicit KerberosAuthException for UGI to throw, (xiao: 
rev 2e853be6577a5b98fd860e6d64f89ca6d160514a)
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KerberosAuthException.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UGIExceptionMessages.java


> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
> Fix For: 2.9.0
>
> Attachments: HADOOP-13627.01.patch, HADOOP-13627.02.patch, 
> HADOOP-13627.03.patch
>
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly —tests that don't break if the text 
> is changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this
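
For readers following along, a hedged sketch of the shape such an exception 
class could take (the committed KerberosAuthException.java may differ in its 
fields and constructors):

{code}
import java.io.IOException;

public class KerberosAuthException extends IOException {
  private String principal;  // assumed diagnostic field, for illustration

  public KerberosAuthException(String msg) {
    super(msg);
  }

  public KerberosAuthException(String msg, Throwable cause) {
    super(msg, cause);
  }

  // Call sites can attach context before rethrowing.
  public KerberosAuthException setPrincipal(String principal) {
    this.principal = principal;
    return this;
  }

  public String getPrincipal() {
    return principal;
  }
}
{code}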



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-10-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13627:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2 and branch-2.8.
Thanks a lot [~ste...@apache.org] for creating the issue, and reviewing the 
patches!

> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
> Fix For: 2.9.0
>
> Attachments: HADOOP-13627.01.patch, HADOOP-13627.02.patch, 
> HADOOP-13627.03.patch
>
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly —tests that don't break if the text 
> is changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-10-07 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13651 started by Aaron Fabbri.
-
> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.
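
A hedged sketch of the kind of hook being described (the interface name 
matches the JIRA, but the methods are assumptions, not the actual S3Guard 
API):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public interface MetadataStore {
  // Record metadata after a successful mutation so later listings see it.
  void put(Path path, FileStatus status) throws IOException;

  // Serve a consistent, possibly cached view of the path's metadata;
  // null when nothing is known, letting S3A fall through to S3 itself.
  FileStatus get(Path path) throws IOException;

  // Forget a path after a delete.
  void delete(Path path) throws IOException;
}
{code}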



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556179#comment-15556179
 ] 

Andrew Wang commented on HADOOP-11798:
--

I think this one is pretty important for XOR performance.

[~drankye], related question, do we have an EC policy for XOR? This would be 
nice for small clusters, since right now the smallest stripe width is (3,2) 
which requires at least 5 racks. An XOR (2,1) would only require 3.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch
>
>
> The raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation would be worthwhile for performance gains.
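
To illustrate the single-erasure recovery the description mentions, a tiny 
pure-Java sketch (the native coder itself is in C; this only shows the XOR 
identity it relies on):

{code}
import java.util.Arrays;

public class XorRecoveryDemo {
  public static void main(String[] args) {
    byte[] d0 = {1, 2, 3}, d1 = {4, 5, 6};
    byte[] parity = new byte[d0.length];
    for (int i = 0; i < d0.length; i++) {
      parity[i] = (byte) (d0[i] ^ d1[i]);  // parity = d0 XOR d1
    }
    // "Lose" d0, then recover it: d0 = d1 XOR parity.
    byte[] recovered = new byte[d0.length];
    for (int i = 0; i < d0.length; i++) {
      recovered[i] = (byte) (d1[i] ^ parity[i]);
    }
    System.out.println(Arrays.equals(d0, recovered));  // true
  }
}
{code}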



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556176#comment-15556176
 ] 

Hadoop QA commented on HADOOP-13698:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13698 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832192/HADOOP-13698.01.patch 
|
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux d68ba7716fe5 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3565c9a |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10703/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document caveat for KeyShell when underlying KeyProvider does not delete a key
> --
>
> Key: HADOOP-13698
> URL: https://issues.apache.org/jira/browse/HADOOP-13698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13698.01.patch
>
>
> For cases like:
> {noformat}
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [DuplicateKeyException], message [Key with name "d" already exists in 
> "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
> enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key delete d
> You are about to DELETE all versions of  key d from KeyProvider 
> KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
> Deleting key: d from KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> d has not been deleted. java.io.IOException: Key named d was already deleted 
> but is disabled. Use purge to destroy all traces or undelete to reactivate.
> java.io.IOException: Key named d was already deleted but is disabled. Use 
> purge to destroy all traces or undelete to reactivate.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> 

[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11798:
-
Labels: hdfs-ec-3.0-must-do  (was: )

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch
>
>
> The raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation would be worthwhile for performance gains.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13397) Add dockerfile for Hadoop

2016-10-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1470#comment-1470
 ] 

Allen Wittenauer edited comment on HADOOP-13397 at 10/7/16 7:58 PM:


bq. It means our docker image only includes ASF-licensed binaries

That's not true.  The OS image contains, at a minimum, glibc and various other 
GPL components.  


was (Author: aw):
bq. It means our docker image only includes ASF-licensed binaries

That's not true.  The OS image contains, at a minimum, the Linux kernel, which is GPL.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community-version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate Docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring them 
> manually
> 3. Start the Hadoop processes as non-daemons
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13698:
---
Status: Patch Available  (was: Open)

> Document caveat for KeyShell when underlying KeyProvider does not delete a key
> --
>
> Key: HADOOP-13698
> URL: https://issues.apache.org/jira/browse/HADOOP-13698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13698.01.patch
>
>
> For cases like:
> {noformat}
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [DuplicateKeyException], message [Key with name "d" already exists in 
> "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
> enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key delete d
> You are about to DELETE all versions of  key d from KeyProvider 
> KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
> Deleting key: d from KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> d has not been deleted. java.io.IOException: Key named d was already deleted 
> but is disabled. Use purge to destroy all traces or undelete to reactivate.
> java.io.IOException: Key named d was already deleted but is disabled. Use 
> purge to destroy all traces or undelete to reactivate.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.deleteKey(KMSClientProvider.java:877)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$DeleteCommand.execute(KeyShell.java:436)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at 

[jira] [Updated] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13698:
---
Attachment: HADOOP-13698.01.patch

> Document caveat for KeyShell when underlying KeyProvider does not delete a key
> --
>
> Key: HADOOP-13698
> URL: https://issues.apache.org/jira/browse/HADOOP-13698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13698.01.patch
>
>
> For cases like:
> {noformat}
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [DuplicateKeyException], message [Key with name "d" already exists in 
> "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
> enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key delete d
> You are about to DELETE all versions of  key d from KeyProvider 
> KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
> Deleting key: d from KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> d has not been deleted. java.io.IOException: Key named d was already deleted 
> but is disabled. Use purge to destroy all traces or undelete to reactivate.
> java.io.IOException: Key named d was already deleted but is disabled. Use 
> purge to destroy all traces or undelete to reactivate.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.deleteKey(KMSClientProvider.java:877)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$DeleteCommand.execute(KeyShell.java:436)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at 

[jira] [Created] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-07 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13698:
--

 Summary: Document caveat for KeyShell when underlying KeyProvider 
does not delete a key
 Key: HADOOP-13698
 URL: https://issues.apache.org/jira/browse/HADOOP-13698
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, kms
Affects Versions: 2.8.0
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Minor


For cases like:
{noformat}
$ hadoop key create d
d has not been created. java.io.IOException: HTTP status [500], exception 
[DuplicateKeyException], message [Key with name "d" already exists in 
"KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
enable.] 
java.io.IOException: HTTP status [500], exception 
[KeyProvider$DuplicateKeyException], message [Key with name "d" already exists 
in "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
enable.] 
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
at 
org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
$ hadoop key delete d
You are about to DELETE all versions of  key d from KeyProvider 
KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
Deleting key: d from KeyProvider: 
KMSClientProvider[http://localhost:16000/kms/v1/]
d has not been deleted. java.io.IOException: Key named d was already deleted 
but is disabled. Use purge to destroy all traces or undelete to reactivate.
java.io.IOException: Key named d was already deleted but is disabled. Use purge 
to destroy all traces or undelete to reactivate.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.deleteKey(KMSClientProvider.java:877)
at 
org.apache.hadoop.crypto.key.KeyShell$DeleteCommand.execute(KeyShell.java:436)
at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
$ hadoop key create d
d has not been created. java.io.IOException: HTTP status [500], exception 
[KeyProvider$DuplicateKeyException], message [Key with name "d" already exists 
in "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
enable.] 
java.io.IOException: HTTP status [500], exception 
[KeyProvider$DuplicateKeyException], message [Key with name "d" already exists 
in "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
enable.] 
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
at 
org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-07 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13669:
---
Attachment: HADOOP-13369.patch.1

* Fixed checkstyle issues.
* Made the error messages debug level.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Attachments: HADOOP-13369.patch, HADOOP-13369.patch.1
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stacktrace. Logging the stacktrace 
> would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556004#comment-15556004
 ] 

Wei-Chiu Chuang commented on HADOOP-13697:
--

Good catch [~liuml07]. The exception is already caught in main() and rethrown. 
I think the exception should not be rethrown. Instead, print the error message 
and return an error code.
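
A hedged sketch of that suggestion (the exact messages and exit code are 
assumptions; runCli stands in for LogLevel.CLI#run):

{code}
import org.apache.hadoop.HadoopIllegalArgumentException;

public class Main {
  public static void main(String[] args) {
    try {
      runCli(args);
    } catch (HadoopIllegalArgumentException e) {
      System.err.println(e.getMessage());  // message only, no stack trace
      System.exit(-1);
    }
  }

  private static void runCli(String[] args) {
    if (args.length == 0) {
      throw new HadoopIllegalArgumentException("No arguments specified");
    }
  }
}
{code}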

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method, and log an error 
> message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13692) hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts.

2016-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1993#comment-1993
 ] 

Hudson commented on HADOOP-13692:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10567 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10567/])
HADOOP-13692. hadoop-aws should declare explicit dependency on Jackson 2 
(cnauroth: rev 69620f955997250d1b543d86d4907ee50218152a)
* (edit) hadoop-tools/hadoop-aws/pom.xml


> hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent 
> classpath conflicts.
> ---
>
> Key: HADOOP-13692
> URL: https://issues.apache.org/jira/browse/HADOOP-13692
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13692-branch-2.001.patch
>
>
> If an end user's application has a dependency on hadoop-aws and no other 
> Hadoop artifacts, then it picks up a transitive dependency on Jackson 2.5.3 
> jars through the AWS SDK.  This can cause conflicts at deployment time, 
> because Hadoop has a dependency on version 2.2.3, and the 2 versions are not 
> compatible with one another.  We can prevent this problem by changing 
> hadoop-aws to declare explicit dependencies on the Jackson artifacts, at the 
> version Hadoop wants.
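
For illustration, the explicit pins would look roughly like this in 
hadoop-tools/hadoop-aws/pom.xml (a sketch; the exact artifact set is in the 
attached patch, and the versions come from the parent dependencyManagement):

{code:xml}
<!-- Sketch only: pin the Jackson 2 artifacts at the Hadoop-managed version
     instead of inheriting whatever the AWS SDK pulls in transitively. -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-core</artifactId>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-annotations</artifactId>
</dependency>
{code}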



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1953#comment-1953
 ] 

Mingliang Liu commented on HADOOP-13697:


[~jojochuang] any thoughts? Thanks.

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method and print an error 
> message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-07 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13697:
--

 Summary: LogLevel#main throws exception if no arguments provided
 Key: HADOOP-13697
 URL: https://issues.apache.org/jira/browse/HADOOP-13697
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.9.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


{code}
root@b9ab37566005:/# hadoop daemonlog

Usage: General options are:
[-getlevel <host:port> <classname> [-protocol (http|https)]
[-setlevel <host:port> <classname> <level> [-protocol (http|https)]

Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: No 
arguments specified
at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
{code}

I think we can catch the exception in the main method and print an error 
message instead of throwing the stack trace, which may frustrate users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13692) hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts.

2016-10-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13692:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thank you, Steve.  I committed this to trunk, branch-2 and branch-2.8.

> hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent 
> classpath conflicts.
> ---
>
> Key: HADOOP-13692
> URL: https://issues.apache.org/jira/browse/HADOOP-13692
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13692-branch-2.001.patch
>
>
> If an end user's application has a dependency on hadoop-aws and no other 
> Hadoop artifacts, then it picks up a transitive dependency on Jackson 2.5.3 
> jars through the AWS SDK.  This can cause conflicts at deployment time, 
> because Hadoop has a dependency on version 2.2.3, and the 2 versions are not 
> compatible with one another.  We can prevent this problem by changing 
> hadoop-aws to declare explicit dependencies on the Jackson artifacts, at the 
> version Hadoop wants.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1805#comment-1805
 ] 

Wei-Chiu Chuang commented on HADOOP-13684:
--

Test failure is unrelated.

> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> Key: HADOOP-13684
> URL: https://issues.apache.org/jira/browse/HADOOP-13684
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13684.001.patch, HADOOP-13684.002.patch
>
>
> If for some reason libhadoop cannot be found/loaded, Snappy complains that 
> Hadoop is not built with Snappy even though it actually is.
> {code:title=SnappyCodec.java}
> public static void checkNativeCodeLoaded() {
>   if (!NativeCodeLoader.isNativeCodeLoaded() ||
>   !NativeCodeLoader.buildSupportsSnappy()) {
> throw new RuntimeException("native snappy library not available: " +
> "this version of libhadoop was built without " +
> "snappy support.");
>   }
> {code}
> This case may happen with MAPREDUCE-6577.
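
The direction of the fix, as a sketch (the attached patches are authoritative): 
distinguish "libhadoop failed to load" from "libhadoop built without snappy" so 
the error message stops blaming the wrong cause.

{code}
public static void checkNativeCodeLoaded() {
  if (!NativeCodeLoader.isNativeCodeLoaded()) {
    // libhadoop itself is missing or unloadable; say so explicitly.
    throw new RuntimeException("native snappy library not available: " +
        "failed to load libhadoop");
  }
  if (!NativeCodeLoader.buildSupportsSnappy()) {
    throw new RuntimeException("native snappy library not available: " +
        "this version of libhadoop was built without snappy support.");
  }
}
{code}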



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1794#comment-1794
 ] 

Hadoop QA commented on HADOOP-13684:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 4 unchanged - 9 fixed = 4 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 19s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13684 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832150/HADOOP-13684.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 44e56d540bb2 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3059b25 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10701/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10701/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10701/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> 

[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Status: Patch Available  (was: Open)

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-10-07 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12611:
---
Fix Version/s: (was: 2.9.0)
   3.0.0-alpha2
   2.8.0

Sure.  Also committed to branch-2.8!

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch, HADOOP-12611.004.patch, HADOOP-12611.005.patch, 
> HADOOP-12611.006.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
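
The quoted stacktrace shows the scheduled {{rollSecret()}} firing while the 
Curator client is not started. A minimal sketch of the kind of guard that 
closes such a race (illustrative only, not the committed patch):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Make the scheduled roll a no-op unless the provider is between start()
// and stop(), so ZooKeeper is never touched through an unstarted client.
public class GuardedRoller {
  private final AtomicBoolean started = new AtomicBoolean(false);

  public void start() { started.set(true); }
  public void stop()  { started.set(false); }

  public void rollSecret() {
    if (!started.get()) {
      return; // timer fired outside the started window; skip this roll
    }
    // pull the new secret from ZooKeeper here
  }
}
{code}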



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-10-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1750#comment-1750
 ] 

Eric Badger commented on HADOOP-12611:
--

[~rkanter], can we commit this to 2.8 as well? The cherry-pick is clean and 
this is where I originally saw the failure. 

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Eric Badger
> Fix For: 2.9.0
>
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch, HADOOP-12611.004.patch, HADOOP-12611.005.patch, 
> HADOOP-12611.006.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-10-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1746#comment-1746
 ] 

Eric Badger commented on HADOOP-12611:
--

Thanks, [~rkanter]!

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Eric Badger
> Fix For: 2.9.0
>
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch, HADOOP-12611.004.patch, HADOOP-12611.005.patch, 
> HADOOP-12611.006.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13231) Isolate test path used by a few S3A tests for more reliable parallel execution.

2016-10-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1681#comment-1681
 ] 

Chris Nauroth commented on HADOOP-13231:


bq. HADOOP-13614 fixes this

Agreed.  Thank you, Steve.

> Isolate test path used by a few S3A tests for more reliable parallel 
> execution.
> ---
>
> Key: HADOOP-13231
> URL: https://issues.apache.org/jira/browse/HADOOP-13231
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Steve Loughran
>Priority: Minor
>
> I have noticed a few more spots in S3A tests that do not make use of the 
> isolated test directory path when running in parallel mode.  While I don't 
> have any evidence that this is really causing problems for parallel test runs 
> right now, it would still be good practice to clean these up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1677#comment-1677
 ] 

Hudson commented on HADOOP-12611:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10565 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10565/])
HADOOP-12611. TestZKSignerSecretProvider#testMultipleInit occasionally 
(rkanter: rev c183b9de8d072a35dcde96a20b1550981f886e86)
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java


> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Eric Badger
> Fix For: 2.9.0
>
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch, HADOOP-12611.004.patch, HADOOP-12611.005.patch, 
> HADOOP-12611.006.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1661#comment-1661
 ] 

Xiao Chen commented on HADOOP-13684:


Nice improvement Wei-Chiu. +1 pending jenkins.

> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> Key: HADOOP-13684
> URL: https://issues.apache.org/jira/browse/HADOOP-13684
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13684.001.patch, HADOOP-13684.002.patch
>
>
> If for some reason libhadoop cannot be found/loaded, Snappy complains that 
> Hadoop is not built with Snappy even though it actually is.
> {code:title=SnappyCodec.java}
> public static void checkNativeCodeLoaded() {
>   if (!NativeCodeLoader.isNativeCodeLoaded() ||
>   !NativeCodeLoader.buildSupportsSnappy()) {
> throw new RuntimeException("native snappy library not available: " +
> "this version of libhadoop was built without " +
> "snappy support.");
>   }
> {code}
> This case may happen with MAPREDUCE-6577.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13684:
-
Attachment: HADOOP-13684.002.patch

Attaching a v2 patch to clean up the checkstyle warning. The test failure is 
unrelated.

> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> Key: HADOOP-13684
> URL: https://issues.apache.org/jira/browse/HADOOP-13684
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13684.001.patch, HADOOP-13684.002.patch
>
>
> If for some reason libhadoop cannot be found/loaded, Snappy complains that 
> Hadoop is not built with Snappy even though it actually is.
> {code:title=SnappyCodec.java}
> public static void checkNativeCodeLoaded() {
>   if (!NativeCodeLoader.isNativeCodeLoaded() ||
>   !NativeCodeLoader.buildSupportsSnappy()) {
> throw new RuntimeException("native snappy library not available: " +
> "this version of libhadoop was built without " +
> "snappy support.");
>   }
> {code}
> This case may happen with MAPREDUCE-6577.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-10-07 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12611:
---
   Resolution: Fixed
 Assignee: Eric Badger  (was: Wei-Chiu Chuang)
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~ebadger].  Committed to trunk and branch-2!

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Eric Badger
> Fix For: 2.9.0
>
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch, HADOOP-12611.004.patch, HADOOP-12611.005.patch, 
> HADOOP-12611.006.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-12774:
---

Assignee: Steve Loughran

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12774:

Status: Patch Available  (was: Open)

Created a PR which uses the short name of the current user *at time of FS 
initialization* as the user and group on all FileStatus instances created 
during the FS's life.
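
A compact sketch of the idea (not the PR diff itself):

{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Capture the short name once at initialization; every FileStatus the FS
// generates afterwards reuses it as both user and group.
public class OwnerSketch {
  private String owner;

  public void initialize() throws IOException {
    owner = UserGroupInformation.getCurrentUser().getShortUserName();
  }

  public String owner() {
    return owner;
  }
}
{code}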

Testing: all of the s3a tests against S3 Ireland, and the command line:

{code}
$ ./hadoop fs -ls s3a://hwdev-steve-ireland/
Found 1 items
drwxrwxrwx   - stevel stevel  0 2016-10-07 17:29 
s3a://hwdev-steve-ireland/tests
{code}



> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1552#comment-1552
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

GitHub user steveloughran opened a pull request:

https://github.com/apache/hadoop/pull/136

HADOOP-12774 use UGI.currentUser for user and group of s3a objects

This patch grabs the UGI current user's short name in the FS initialize call, 
then uses that as the user and group for all FileStatus instances generated.

```
$ ./hadoop fs -ls s3a://hwdev-steve-ireland/
Found 1 items
drwxrwxrwx   - stevel stevel  0 2016-10-07 17:29 
s3a://hwdev-steve-ireland/tests
```


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/steveloughran/hadoop s3/HADOOP-12774-username

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/136.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #136


commit a58ed24a4b618fd3349eac40ad5900cfbd83faa3
Author: Steve Loughran 
Date:   2016-10-07T16:30:25Z

HADOOP-12774 use UGI.currentUser for user and group of s3a objects




> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-10-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1470#comment-1470
 ] 

Allen Wittenauer commented on HADOOP-13397:
---

bq. It means our docker image only includes ASF-licensed binaries

That's not true.  The OS image contains, at minimum, the Linux kernel, which is 
GPL-licensed.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2.  From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here's some requirement:
> 1. Separate Docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring them 
> manually
> 3. Start Hadoop processes as non-daemons
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13310) S3A reporting of file group as empty is harmful to compatibility for the shell.

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13310:

Parent Issue: HADOOP-11694  (was: HADOOP-13204)

> S3A reporting of file group as empty is harmful to compatibility for the 
> shell.
> ---
>
> Key: HADOOP-13310
> URL: https://issues.apache.org/jira/browse/HADOOP-13310
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> S3A does not persist group information in file metadata.  Instead, it stubs 
> the value of the group to an empty string.  Although the JavaDocs for 
> {{FileStatus#getGroup}} indicate that empty string is a possible return 
> value, this is likely to cause compatibility problems.  Most notably, shell 
> scripts that expect to be able to perform positional parsing on the output of 
> things like {{hadoop fs -ls}} will stop working if retargeted from HDFS to 
> S3A.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12878) Impersonate hosts in s3a for better data locality handling

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1436#comment-1436
 ] 

Steve Loughran commented on HADOOP-12878:
-

Following on from this: note that Hive actively scans for the word "localhost" 
when querying block locations, and then interprets that as "anywhere": it 
doesn't request locality in any job submissions.

> Impersonate hosts in s3a for better data locality handling
> --
>
> Key: HADOOP-12878
> URL: https://issues.apache.org/jira/browse/HADOOP-12878
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>
> Currently, {{localhost}} is passed as the locality for each block, causing 
> all blocks involved in a job to initially target the same node (the RM), 
> before being moved by the scheduler (to a rack-local node). This reduces 
> parallelism for jobs (with short-lived mappers). 
> We should mimic Azure's implementation: a config setting 
> {{fs.s3a.block.location.impersonatedhost}} where the user can enter the list 
> of hostnames in the cluster to return to {{getFileBlockLocations}}. 
> Possible optimization: for larger systems, it might be better to return N 
> (5?) random hostnames to prevent passing a huge array (the downstream code 
> assumes size = O(3)).
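
A hedged sketch of what {{getFileBlockLocations()}} could return under this 
proposal (the property name comes from the description above; none of this is 
shipped code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;

// Report configured cluster hosts instead of "localhost" for each block.
public class ImpersonatedLocations {
  public BlockLocation[] locationsFor(Configuration conf, long length) {
    String[] hosts = conf.getTrimmedStrings(
        "fs.s3a.block.location.impersonatedhost", "localhost");
    // names (host:port pairs) are meaningless for S3, so only hosts are set
    return new BlockLocation[] {
        new BlockLocation(null, hosts, 0, length)
    };
  }
}
{code}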



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13696:
---

 Summary: change hadoop-common dependency scope of jsch to provided.
 Key: HADOOP-13696
 URL: https://issues.apache.org/jira/browse/HADOOP-13696
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.7.3
Reporter: Steve Loughran
Priority: Minor


The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
downstream. Marking it as "provided" would mean that it would only be needed by 
those programs which wanted the SFTP filesystem, and, if they wanted to use a 
different jsch version, there'd be no maven problems
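
For illustration, the scope change in hadoop-common's pom.xml would look 
roughly like this (a sketch; the attached patch is authoritative):

{code:xml}
<!-- Downstream users who want the SFTP filesystem now add jsch themselves,
     and may pick their own version without Maven conflicts. -->
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <scope>provided</scope>
</dependency>
{code}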




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1374#comment-1374
 ] 

Hadoop QA commented on HADOOP-13061:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} root: The patch generated 0 new + 194 unchanged - 24 
fixed = 194 total (was 218) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832130/HADOOP-13061.14.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c153befe397d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ebd4f39 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10700/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10700/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 

[jira] [Commented] (HADOOP-13278) S3AFileSystem mkdirs does not need to validate parent path components

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1361#comment-1361
 ] 

Steve Loughran commented on HADOOP-13278:
-

I'm thinking we need to be able to recognise the problem of IAM permissions 
blocking read/write of a parent path, and simply stop work there. One issue: 
now that we are doing all parent dir deletes as a single DELETE call, we'll 
need to recognise a partial failure of the operation due to security checks 
as a successful operation.
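
A minimal sketch of that idea against the AWS SDK v1 bulk delete; the 
{{deleteParentDirs}} helper is hypothetical, not the actual S3A code:

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;

// Hypothetical helper: bulk-delete fake parent directory markers and
// swallow AccessDenied errors, since IAM may deny access above the
// prefix we were actually granted. Any other error code stays fatal.
final class ParentDirCleanup {
  static void deleteParentDirs(AmazonS3 s3, DeleteObjectsRequest request) {
    try {
      s3.deleteObjects(request);
    } catch (MultiObjectDeleteException e) {
      for (MultiObjectDeleteException.DeleteError err : e.getErrors()) {
        if (!"AccessDenied".equals(err.getCode())) {
          throw e; // a real failure, not just an IAM boundary
        }
      }
      // every failure was a permission check on a parent path:
      // treat the whole operation as successful
    }
  }
}
{code}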

> S3AFileSystem mkdirs does not need to validate parent path components
> -
>
> Key: HADOOP-13278
> URL: https://issues.apache.org/jira/browse/HADOOP-13278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools
>Reporter: Adrian Petrescu
>Priority: Minor
>
> According to S3 semantics, there is no conflict if a bucket contains a key 
> named {{a/b}} and also a directory named {{a/b/c}}. "Directories" in S3 are, 
> after all, nothing but prefixes.
> However, the {{mkdirs}} call in {{S3AFileSystem}} does go out of its way to 
> traverse every parent path component for the directory it's trying to create, 
> making sure there's no file with that name. This is suboptimal for three main 
> reasons:
>  * Wasted API calls, since the client is getting metadata for each path 
> component 
>  * This can cause *major* problems with buckets whose permissions are being 
> managed by IAM, where access may not be granted to the root bucket, but only 
> to some prefix. When you call {{mkdirs}}, even on a prefix that you have 
> access to, the traversal up the path will cause you to eventually hit the 
> root bucket, which will fail with a 403 - even though the directory creation 
> call would have succeeded.
>  * Some people might actually have a file that matches some other file's 
> prefix... I can't see why they would want to do that, but it's not against 
> S3's rules.
> I've opened a pull request with a simple patch that just removes this portion 
> of the check. I have tested it with my team's instance of Spark + Luigi, and 
> can confirm it works, and resolves the aforementioned permissions issue for a 
> bucket on which we only had prefix access.
> This is my first ticket/pull request against Hadoop, so let me know if I'm 
> not following some convention properly :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13222) s3a.mkdirs() to delete empty fake parent directories

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13222:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> s3a.mkdirs() to delete empty fake parent directories
> 
>
> Key: HADOOP-13222
> URL: https://issues.apache.org/jira/browse/HADOOP-13222
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AFileSystem.mkdirs()}} has a TODO comment: what to do about fake parent 
> directories.
> The answer is: as with files, they should be deleted. This can be done 
> asynchronously.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13221) s3a create() doesn't check for a parent path being a file

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1342#comment-1342
 ] 

Steve Loughran commented on HADOOP-13221:
-

Moved to S3a Phase III, marking as a dependent of HADOOP-13695. We could make 
this another scheduled future of a block write.

> s3a create() doesn't check for a parent path being a file
> -
>
> Key: HADOOP-13221
> URL: https://issues.apache.org/jira/browse/HADOOP-13221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Rajesh Balamohan
>
> Seen in a code review. Notable that if true, this got by all the FS contract 
> tests, showing we missed a couple.
> {{S3AFileSystem.create()}} does not examine its parent paths to verify that 
> there does not exist one which is a file. It looks for the destination path 
> if overwrite=false (see HADOOP-13188 for issues there), but it doesn't check 
> the parent for not being a file, or the parent of that path.
> It must go up the tree, verifying that either a path does not exist, or that 
> the path is a directory. The scan can stop at the first entry which is a 
> directory, thus the operation is O(empty-directories) and not O(directories).
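
A minimal sketch of that upward scan against the Hadoop {{FileSystem}} API; 
the {{verifyParents}} helper is illustrative, not the actual S3A code:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.ParentNotDirectoryException;
import org.apache.hadoop.fs.Path;

// Illustrative only: walk up from the parent of the file being created,
// failing if any ancestor is a file, and stopping at the first existing
// directory, since everything above it must already be a directory.
final class ParentCheck {
  static void verifyParents(FileSystem fs, Path file) throws IOException {
    for (Path p = file.getParent(); p != null; p = p.getParent()) {
      try {
        FileStatus st = fs.getFileStatus(p);
        if (st.isFile()) {
          throw new ParentNotDirectoryException(p + " is a file");
        }
        return; // first directory found: the scan can stop here
      } catch (FileNotFoundException e) {
        // path does not exist; acceptable, keep walking up
      }
    }
  }
}
{code}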



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13695) S3A to use a thread pool for async path operations

2016-10-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13695:
---

 Summary: S3A to use a thread pool for async path operations
 Key: HADOOP-13695
 URL: https://issues.apache.org/jira/browse/HADOOP-13695
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


S3A path operations are often slow due to directory scanning, mock directory 
create/delete, etc. Many of these can be done asynchronously:

* because deletion is eventually consistent, deleting parent dirs after an 
operation has returned doesn't alter the behaviour, except in the special case 
of operation failure.
* scanning for paths/parents of a file in the create operation only needs to 
complete before the close() operation instantiates the object; there is no 
need to block create().
* parallelized COPY calls would permit asynchronous rename.

We could either use the thread pool used for block writes, or somehow isolate 
low cost path ops (GET, DELETE) from the more expensive calls (COPY, PUT) so 
that a thread doing basic IO doesn't block for the duration of the long op. 
Maybe also use {{Semaphore.tryAcquire()}} and only start async work if there 
actually is an idle thread, doing it synchronously if not; a sketch of that 
idea follows. Maybe it depends on the operation: path query/cleanup 
before/after a write is something which could be scheduled as just more 
futures in the block write.
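
A minimal sketch of the {{tryAcquire()}} idea; the class name and pool sizes 
are hypothetical, not actual S3A code:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Illustrative only: run a path operation asynchronously if a worker
// permit is free, otherwise fall back to the calling thread, so cheap
// GET/DELETE calls never queue behind long COPY/PUT uploads.
class AsyncOrInline {
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private final Semaphore permits = new Semaphore(4);

  void submit(Runnable pathOp) {
    if (permits.tryAcquire()) {
      pool.execute(() -> {
        try {
          pathOp.run();
        } finally {
          permits.release(); // free the permit for the next caller
        }
      });
    } else {
      pathOp.run(); // no idle thread: do the work synchronously
    }
  }
}
{code}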



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13221) s3a create() doesn't check for a parent path being a file

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13221:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> s3a create() doesn't check for a parent path being a file
> -
>
> Key: HADOOP-13221
> URL: https://issues.apache.org/jira/browse/HADOOP-13221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Rajesh Balamohan
>
> Seen in a code review. Notable that if true, this got by all the FS contract 
> tests, showing we missed a couple.
> {{S3AFileSystem.create()}} does not examine its parent paths to verify that 
> there does not exist one which is a file. It looks for the destination path 
> if overwrite=false (see HADOOP-13188 for issues there), but it doesn't check 
> the parent for not being a file, or the parent of that path.
> It must go up the tree, verifying that either a path does not exist, or that 
> the path is a directory. The scan can stop at the first entry which is a 
> directory, thus the operation is O(empty-directories) and not O(directories).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13231) Isolate test path used by a few S3A tests for more reliable parallel execution.

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13231.
-
Resolution: Duplicate
  Assignee: Steve Loughran  (was: Chris Nauroth)

HADOOP-13614 fixes this

> Isolate test path used by a few S3A tests for more reliable parallel 
> execution.
> ---
>
> Key: HADOOP-13231
> URL: https://issues.apache.org/jira/browse/HADOOP-13231
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Steve Loughran
>Priority: Minor
>
> I have noticed a few more spots in S3A tests that do not make use of the 
> isolated test directory path when running in parallel mode.  While I don't 
> have any evidence that this is really causing problems for parallel test runs 
> right now, it would still be good practice to clean these up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11572:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> s3a delete() operation fails during a concurrent delete of child entries
> 
>
> Key: HADOOP-11572
> URL: https://issues.apache.org/jira/browse/HADOOP-11572
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11572-001.patch
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a 
> child entry during a recursive directory delete is propagated as an 
> exception, rather than ignored as a detail which idempotent operations should 
> just ignore.
> the exception should be caught and, if a file not found problem, logged 
> rather than propagated
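
A minimal sketch of that catch-and-ignore, with invented helper names (not 
the actual S3A delete code):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

// Illustrative only: a child entry vanishing mid-delete means a
// concurrent delete already did our work; an idempotent recursive
// delete should treat that as success, not failure.
final class QuietDelete {
  interface FileSystemOp {
    void run() throws IOException;
  }

  static void deleteChildQuietly(FileSystemOp deleteChild)
      throws IOException {
    try {
      deleteChild.run();
    } catch (FileNotFoundException e) {
      // concurrent delete race: the entry is already gone;
      // ignore it (or log at debug) rather than propagate
    }
  }
}
{code}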



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13516.
-
  Resolution: Cannot Reproduce
Target Version/s:   (was: 2.8.0)

Closing as cannot reproduce. If this surfaces again, test against the latest 
Hadoop release; if it occurs there, please re-open.

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Steve Loughran
>Priority: Minor
>
> With an empty S3 bucket, run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13061) Refactor erasure coders

2016-10-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13061:

Attachment: HADOOP-13061.14.patch

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13692) hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts.

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1012#comment-1012
 ] 

Steve Loughran commented on HADOOP-13692:
-

Looking at the Spark stuff, I'm explicitly adding in another jackson entry 
(and blaming it on AWS SDK 10.6+):

{code}
<dependency>
  <groupId>com.fasterxml.jackson.dataformat</groupId>
  <artifactId>jackson-dataformat-cbor</artifactId>
  <version>${fasterxml.jackson.version}</version>
</dependency>
{code}

Looking into HADOOP-13050; it looks like that comes in after AWS SDK 10.6, so 
it is not relevant.

+1


> hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent 
> classpath conflicts.
> ---
>
> Key: HADOOP-13692
> URL: https://issues.apache.org/jira/browse/HADOOP-13692
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13692-branch-2.001.patch
>
>
> If an end user's application has a dependency on hadoop-aws and no other 
> Hadoop artifacts, then it picks up a transitive dependency on Jackson 2.5.3 
> jars through the AWS SDK.  This can cause conflicts at deployment time, 
> because Hadoop has a dependency on version 2.2.3, and the 2 versions are not 
> compatible with one another.  We can prevent this problem by changing 
> hadoop-aws to declare explicit dependencies on the Jackson artifacts, at the 
> version Hadoop wants.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13506) Redundant groupid warning in child projects

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1004#comment-1004
 ] 

Steve Loughran commented on HADOOP-13506:
-

+1

this has irritated me for a while when using IDEA to look at POMs

> Redundant groupid warning in child projects
> ---
>
> Key: HADOOP-13506
> URL: https://issues.apache.org/jira/browse/HADOOP-13506
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
> Attachments: HADOOP-13506.01.patch
>
>
> Some child projects define groupId (org.apache.hadoop) redundantly, although 
> it is already defined in the parent project. In addition, IDEs throw a 
> warning message like 
> {code}
> Definition of groupId is redundant, because it's inherited from the parent
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13691) remove build user and date from various hadoop UI

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554994#comment-15554994
 ] 

Steve Loughran commented on HADOOP-13691:
-

I think it's good on some about/diagnostics page; it doesn't need to be 
everywhere. But it is good to know at a glance when something was built, 
particularly if anyone is doing their own personal builds.

> remove build user and date from various hadoop UI
> -
>
> Key: HADOOP-13691
> URL: https://issues.apache.org/jira/browse/HADOOP-13691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Priority: Minor
>
> Currently in the namenode UI as well as the resource manager UI, we display 
> the date of the build as well as the user id of the person who built it. 
> Although other bits of information are useful (e.g. git commit id, branch, 
> etc.), the value of the build date and user is suspect. We should consider 
> removing them from the visible UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554989#comment-15554989
 ] 

Steve Loughran commented on HADOOP-13627:
-

LGTM

+1

thanks for this; we will all appreciate it downstream

> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
> Attachments: HADOOP-13627.01.patch, HADOOP-13627.02.patch, 
> HADOOP-13627.03.patch
>
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly and don't break if the text is 
> changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this
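
A minimal sketch of the proposed exception; the constant name here is 
invented for illustration, not taken from the eventual patch:

{code}
import java.io.IOException;

// Illustrative only: an IOException subclass that callers can catch
// explicitly, with message text held in public constants so tests can
// assert on it without hard-coding strings.
public class KerberosAuthException extends IOException {
  public static final String NO_LOGIN_PRINCIPALS =
      "No login principals found";

  public KerberosAuthException(String msg) {
    super(msg);
  }

  public KerberosAuthException(String msg, Throwable cause) {
    super(msg, cause);
  }
}
{code}

A test can then match on {{KerberosAuthException.NO_LOGIN_PRINCIPALS}} rather 
than on literal text.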



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12977) s3a to handle delete("/", true) robustly

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12977:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> s3a to handle delete("/", true) robustly
> 
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12977-001.patch, HADOOP-12977-002.patch, 
> HADOOP-12977-branch-2-002.patch, HADOOP-12977-branch-2-002.patch, 
> HADOOP-12977-branch-2-003.patch, HADOOP-12977-branch-2-004.patch, 
> HADOOP-12977-branch-2-005.patch, HADOOP-12977-branch-2-006.patch
>
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything underneath, 
> while preserving the root dir itself".
> # s3a needs to support this.
> # this slipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.
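
A minimal sketch of those semantics against the Hadoop {{FileSystem}} API; 
the {{deleteRoot}} helper is illustrative, not the actual S3A code:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: delete("/", true) removes every child of the root
// but reports success while leaving the root directory itself in place.
final class RootDelete {
  static boolean deleteRoot(FileSystem fs) throws IOException {
    for (FileStatus child : fs.listStatus(new Path("/"))) {
      fs.delete(child.getPath(), true); // recursive delete of each child
    }
    return true; // the root itself is preserved by design
  }
}
{code}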



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12977) s3a to handle delete("/", true) robustly

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554961#comment-15554961
 ] 

Steve Loughran commented on HADOOP-12977:
-

Branch-2.8 was addressed by cherry-picking the HADOOP-13164 faster delete of 
fake directories patch; this delivers the speedup there and eliminates the 
diff between the two branches.

> s3a to handle delete("/", true) robustly
> 
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12977-001.patch, HADOOP-12977-002.patch, 
> HADOOP-12977-branch-2-002.patch, HADOOP-12977-branch-2-002.patch, 
> HADOOP-12977-branch-2-003.patch, HADOOP-12977-branch-2-004.patch, 
> HADOOP-12977-branch-2-005.patch, HADOOP-12977-branch-2-006.patch
>
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything underneath, 
> while preserving the root dir itself".
> # s3a needs to support this.
> # this slipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-10-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13164:

Target Version/s: 2.8.0  (was: 2.9.0)

Cherry-picked into 2.8 and adjusted the fix version appropriately; this 
simplified merging the delete patch in.

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13164-branch-005.patch, 
> HADOOP-13164-branch-2-003.patch, HADOOP-13164-branch-2-004.patch, 
> HADOOP-13164.branch-2-002.patch, HADOOP-13164.branch-2.WIP.002.patch, 
> HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting in the folder structure, it might take a lot longer, as it invokes 
> getFileStatus multiple times. Instead, it should break out of the loop once 
> a non-empty directory is encountered.
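
A minimal sketch of that optimisation; the two probes are left abstract 
because they stand in for S3 HEAD/DELETE calls, and none of the names are 
taken from the actual S3AFileSystem code:

{code}
import org.apache.hadoop.fs.Path;

// Illustrative only: walk up the tree deleting fake directory markers,
// stopping at the first non-empty directory, since every ancestor of a
// non-empty directory is itself non-empty.
abstract class FakeDirCleanup {
  abstract boolean isNonEmptyDirectory(Path path);
  abstract void deleteFakeDirectoryMarker(Path path);

  void deleteUnnecessaryFakeDirectories(Path path) {
    while (path != null && !path.isRoot()) {
      if (isNonEmptyDirectory(path)) {
        break; // all parents of a non-empty dir are non-empty too
      }
      deleteFakeDirectoryMarker(path); // remove the "path/" marker
      path = path.getParent();
    }
  }
}
{code}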



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12977) s3a to handle delete("/", true) robustly

2016-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554947#comment-15554947
 ] 

Hudson commented on HADOOP-12977:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10563 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10563/])
HADOOP-12977 s3a to handle delete("/", true) robustly. Contributed by (stevel: 
rev ebd4f39a393e5fa9a810c6a36b749549229a53df)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRootDirectoryTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextURIBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


> s3a to handle delete("/", true) robustly
> 
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12977-001.patch, HADOOP-12977-002.patch, 
> HADOOP-12977-branch-2-002.patch, HADOOP-12977-branch-2-002.patch, 
> HADOOP-12977-branch-2-003.patch, HADOOP-12977-branch-2-004.patch, 
> HADOOP-12977-branch-2-005.patch, HADOOP-12977-branch-2-006.patch
>
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything underneath, 
> while preserving the root dir itself".
> # s3a needs to support this.
> # this slipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12977) s3a to handle delete("/", true) robustly

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554897#comment-15554897
 ] 

Steve Loughran commented on HADOOP-12977:
-

Patch applied to trunk and branch-2; the trunk patch was simply done by 
skipping the s3 test; git lets you do this with ease

{code}
git apply -3 --verbose --whitespace=fix --exclude 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3/ITestS3ContractRootDir.java
  HADOOP-12977-branch-2-006.patch 
{code}

Branch-2.8 is another matter; I will do that quickly and attach under here again

> s3a to handle delete("/", true) robustly
> 
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12977-001.patch, HADOOP-12977-002.patch, 
> HADOOP-12977-branch-2-002.patch, HADOOP-12977-branch-2-002.patch, 
> HADOOP-12977-branch-2-003.patch, HADOOP-12977-branch-2-004.patch, 
> HADOOP-12977-branch-2-005.patch, HADOOP-12977-branch-2-006.patch
>
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything underneath, 
> while preserving the root dir itself".
> # s3a needs to support this.
> # this slipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12977) s3a to handle delete("/", true) robustly

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554894#comment-15554894
 ] 

Steve Loughran commented on HADOOP-12977:
-

Patch applied to trunk and branch-2; the trunk patch was simply done by 
skipping the s3 test; git lets you do this with ease

{code}
git apply -3 --verbose --whitespace=fix --exclude 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3/ITestS3ContractRootDir.java
  HADOOP-12977-branch-2-006.patch 
{code}

> s3a to handle delete("/", true) robustly
> 
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12977-001.patch, HADOOP-12977-002.patch, 
> HADOOP-12977-branch-2-002.patch, HADOOP-12977-branch-2-002.patch, 
> HADOOP-12977-branch-2-003.patch, HADOOP-12977-branch-2-004.patch, 
> HADOOP-12977-branch-2-005.patch, HADOOP-12977-branch-2-006.patch
>
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything underneath, 
> while preserving the root dir itself".
> # s3a needs to support this.
> # this slipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the Hadoop-compatible file systems shipped with Hadoop.

2016-10-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554878#comment-15554878
 ] 

Steve Loughran commented on HADOOP-13687:
-

(this is actually a comment on patch-1; didn't hit submit in time, so some of 
the comments are probably obsolete)

# I like the idea; as for, say, SPARK-7481, it'd simplify tracking an expanding 
list of object stores.
# I'd prefer the name {{hadoop-cloud-storage}}. Why? (a) it's what it is, and 
(b) it avoids us having to reject patches related to adding clients of external 
filesystems, *ones not tested in ASF releases*. 
# I see the appeal of AW's suggestion of a new source tree. Maybe we could 
start with {{hadoop-cloud-storage/hadoop-cloud-storage}} for this, move the 
trunk-only hadoop-adl work in there, and move the others (azure, openstack, 
aws) at our leisure.
# I think you could be more aggressive about the dependencies of the openstack 
stuff; I suspect there is stuff there which could/should be tagged as 
scope=provided, so tuning down the transitiveness more.
# [~aw] is there no chance of Yetus doing a mvn dependencies > 
target/dependencies.txt operation on any patch which touches POMs? Or perhaps 
we add the policy: all patches which update dependencies must attach the 
changed dependency graph.

> Provide a unified dependency artifact that transitively includes the 
> Hadoop-compatible file systems shipped with Hadoop.
> 
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-trunk.001.patch, 
> HADOOP-13687-trunk.002.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13690) Fix typos in core-default.xml

2016-10-07 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554770#comment-15554770
 ] 

Yiqun Lin commented on HADOOP-13690:


Thanks [~brahmareddy] for the review and commit!

> Fix typos in core-default.xml
> -
>
> Key: HADOOP-13690
> URL: https://issues.apache.org/jira/browse/HADOOP-13690
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13690.001.patch
>
>
> There are three typos in file {{core-default.xml}}:
> * In {{hadoop.user.group.static.mapping.overrides}}: in otherwords-> in other 
> words
> * In {{hadoop.security.group.mapping.ldap.search.group.hierarchy.levels}}: 
> exectue->execute
> * In {{hadoop.security.dns.interface}}: subsitution->substitution



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13506) Redundant groupid warning in child projects

2016-10-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13506:

Status: Patch Available  (was: Open)

> Redundant groupid warning in child projects
> ---
>
> Key: HADOOP-13506
> URL: https://issues.apache.org/jira/browse/HADOOP-13506
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
> Attachments: HADOOP-13506.01.patch
>
>
> Some child projects define groupId (org.apache.hadoop) redundantly, although 
> it is already defined in the parent project. In addition, IDEs throw a 
> warning message like 
> {code}
> Definition of groupId is redundant, because it's inherited from the parent
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13506) Redundant groupid warning in child projects

2016-10-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13506:

Status: Open  (was: Patch Available)

> Redundant groupid warning in child projects
> ---
>
> Key: HADOOP-13506
> URL: https://issues.apache.org/jira/browse/HADOOP-13506
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
> Attachments: HADOOP-13506.01.patch
>
>
> Some child projects define groupId (org.apache.hadoop) redundantly, although 
> it is already defined in the parent project. In addition, IDEs throw a 
> warning message like 
> {code}
> Definition of groupId is redundant, because it's inherited from the parent
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-10-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554268#comment-15554268
 ] 

Tsuyoshi Ozawa commented on HADOOP-13397:
-

[~aw] I got a comment in LEGAL-270:
{quote}
All official Apache releases are in source form; Docker images can be built 
from our official source releases, but are derived distributions analogous to, 
say, Debian packages.
{quote}

Binary distribution itself seems to be fine, as you know. In addition, I 
checked the behaviour of Docker: it seems that the Docker command tries to 
download the binary images described in the "FROM" instruction in parallel. It 
means our Docker image would include only ASF-licensed binaries, provided we 
only include compiled binaries which passed the release vote. Do you have any 
concern?

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community version Dockerfile in Hadoop; most docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. From HortonWorks, sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring 
> manually
> 3. Start the Hadoop processes as non-daemons
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13397) Add dockerfile for Hadoop

2016-10-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554268#comment-15554268
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-13397 at 10/7/16 6:23 AM:
--

[~aw] got a comment in LEGAL-270:
{quote}
All official Apache releases are in source form; Docker images can be built 
from our official source releases, but are derived distributions analogous to, 
say, Debian packages.
{quote}

Binary distribution itself seems to be fine, as you know. In addition, I 
checked the behaviour of Docker: it seems that the Docker command tries to 
download the binary images described in the "FROM" instruction in parallel. It 
means our Docker image would include only ASF-licensed binaries, provided we 
only include compiled binaries which passed the release vote. Do you have any 
concern?


was (Author: ozawa):
[~aw] I got a comment in LEGAL-270:
{quote}
All official Apache releases are in source form; Docker images can be built 
from our official source releases, but are derived distributions analogous to, 
say, Debian packages.
{quote}

Binary distribution itself seems to be fine, as you know. In addition, I 
checked the behaviour of Docker: it seems that the Docker command tries to 
download the binary images described in the "FROM" instruction in parallel. It 
means our Docker image would include only ASF-licensed binaries, provided we 
only include compiled binaries which passed the release vote. Do you have any 
concern?

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community version Dockerfile in Hadoop; most docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. From HortonWorks, sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring 
> manually
> 3. Start the Hadoop processes as non-daemons
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13691) remove build user and date from various hadoop UI

2016-10-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554248#comment-15554248
 ] 

Allen Wittenauer commented on HADOOP-13691:
---

bq.  the value of the build date and user is suspect

Completely disagree.

It's very useful to be able to use 'hadoop version' to determine the date and 
user who built it when dealing with custom builds. From an ASF perspective, as 
soon as we start doing releases correctly again, it'll be a quick way to 
determine who the RE was for a given release.

> remove build user and date from various hadoop UI
> -
>
> Key: HADOOP-13691
> URL: https://issues.apache.org/jira/browse/HADOOP-13691
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Priority: Minor
>
> Currently in the namenode UI as well as the resource manager UI, we display 
> the date of the build as well as the user id of the person who built it. 
> Although other bits of information is useful (e.g. git commit it, branch, 
> etc.), the value of the build date and user is suspect. We should consider 
> removing them from the visible UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org