[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Description: 
In {{HttpServer2#hasAdministratorAccess}}, it uses 
{{hadoop.security.authorization}} to detect whether HTTP requests are 
authenticated. That is not correct, because enabling Kerberos and configuring 
HTTP SPNEGO are two separate steps. If Kerberos is enabled while HTTP SPNEGO is 
not configured, some links such as "/logs" cannot be accessed, and the server 
returns an error message like this:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
invoke {{HttpServer2#hasAdministratorAccess}}.

{{getAuthType}} returns the authentication scheme of the request, or null if 
the request was not authenticated.
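
For illustration, a minimal sketch of the proposed guard (the helper class and 
method names here are hypothetical, not the actual patch; 
{{HttpServer2#hasAdministratorAccess}} does have this signature):

{code:java}
import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.http.HttpServer2;

public class AdminAccessGuard {
  // Hypothetical helper, not the actual patch: only consult the admin ACL
  // when the servlet container actually authenticated the request.
  public static boolean isAdminAccessAllowed(ServletContext ctx,
      HttpServletRequest request, HttpServletResponse response)
      throws IOException {
    if (request.getAuthType() == null) {
      // Kerberos may be enabled while HTTP SPNEGO is not configured; in
      // that case there is no authenticated user to authorize.
      return true;
    }
    return HttpServer2.hasAdministratorAccess(ctx, request, response);
  }
}
{code}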

  was:
In {{HttpServer2#hasAdministratorAccess}}, it uses 
`hadoop.security.authorization` to detect whether HTTP is authenticated.
It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
such as "/logs", and it will return error message as below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{hadoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled, if the value of  {{hadoop.http.authentication.type}}  equals `simple`, 
anybody has administrator access.


> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> {{hadoop.security.authorization}} to detect whether HTTP requests are 
> authenticated. That is not correct, because enabling Kerberos and configuring 
> HTTP SPNEGO are two separate steps. If Kerberos is enabled while HTTP SPNEGO 
> is not configured, some links such as "/logs" cannot be accessed, and the 
> server returns an error message like this:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of the request, or null if 
> the request was not authenticated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-11 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: HADOOP-12082-001.patch

[~kai.zhang] Thanks for the feedback. The requirement is to set up an LDAP 
server for unit testing. After reading the docs for the kerby ldap-backend, I 
figured it wouldn't be useful. So now I have added back the ApacheDS 
dependencies (only for unit testing).

[~benoyantony] Can you please review the patch? I have added docs as well...

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082.patch, 
> hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - The HTTP protocol provides a simple challenge-response authentication 
> mechanism that can be used by a server to challenge a client request and by 
> a client to provide the necessary authentication information. 
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge that indicates the authentication scheme(s) and parameters 
> applicable to the Request-URI. 
> - In case the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand, for LDAP authentication the ‘Basic’ authentication scheme 
> is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea is to provide a custom implementation of 
> the Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via the Negotiate auth challenge) and LDAP 
> (via the Basic auth challenge). During the authentication phase, it would 
> send both challenges and let the client pick the appropriate one. If the 
> client responds with an ‘Authorization’ header tagged with ‘Negotiate’ - it 
> will use Kerberos authentication. If the client responds with an 
> ‘Authorization’ header tagged with ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags). 
> - the Apache HttpClient library can be configured to use a specific 
> authentication scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
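
To make the challenge phase above concrete, here is a minimal, hypothetical 
sketch (not the attached patch) of a server offering both schemes in a single 
401 response:

{code:java}
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch of the multi-scheme challenge phase described above;
// the handler in the attached patch may differ.
public class MultiSchemeChallenge {
  public static void sendChallenges(HttpServletResponse response) {
    // Offer both schemes; per RFC 2616 the client must pick the strongest
    // auth-scheme it understands.
    response.addHeader("WWW-Authenticate", "Negotiate");
    response.addHeader("WWW-Authenticate", "Basic realm=\"hadoop\"");
    response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); // 401
  }
}
{code}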



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-10-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567529#comment-15567529
 ] 

Sangjin Lee commented on HADOOP-12090:
--

The patch adds references to {{SocketAcceptor}} and {{SocketSessionConfig}}, 
which are Mina classes. Since these are new direct references, I added the 
explicit dependency.

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The error that causes this failure on the minikdc's KDC server is a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at 
> org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at 
> org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at 
> org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-10-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567076#comment-15567076
 ] 

Sangjin Lee edited comment on HADOOP-12090 at 10/12/16 4:27 AM:


Just to be clear, this issue arises because Mina (the networking stack on 
which ApacheDS depends) sets the send and receive buffer sizes to 1 KB (see 
DIRSERVER-2074 for more detail). If we move away from that behavior, for 
example by using different libraries, the problem may go away.
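
As a rough illustration (not the actual patch), raising Mina's socket buffer 
sizes with the Mina 2.x API looks like this; the 64 KB value follows the later 
comment in this thread:

{code:java}
import org.apache.mina.transport.socket.SocketAcceptor;
import org.apache.mina.transport.socket.SocketSessionConfig;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

// Rough sketch, not the actual patch: raise Mina's send/receive buffer
// sizes above the 1 KB default so larger KDC replies are not truncated.
public class MinaBufferTuning {
  public static SocketAcceptor newTunedAcceptor() {
    SocketAcceptor acceptor = new NioSocketAcceptor();
    SocketSessionConfig config = acceptor.getSessionConfig();
    config.setSendBufferSize(64 * 1024);    // 64 KB
    config.setReceiveBufferSize(64 * 1024); // 64 KB
    return acceptor;
  }
}
{code}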


was (Author: sjlee0):
Just to be clear, this issue is caused because Mina (the networking stack on 
which ApacheDS depends) does set the send and receive buffer size to 1 KB (see 
DIRSERVER-2074 more detail). If we move away from that behavior either by using 
different libraries, the problem might go away.

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The error that causes this failure on the minikdc's KDC server is a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at 
> org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at 
> org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at 
> org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To 

[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567405#comment-15567405
 ] 

Hadoop QA commented on HADOOP-11798:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-11798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832652/HADOOP-11798-v2.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  findbugs  checkstyle  |
| uname | Linux 460e2a959ffc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b84c489 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10738/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10738/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10738/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10738/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message 

[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567345#comment-15567345
 ] 

Akira Ajisaka commented on HADOOP-13708:


LGTM, +1. Hi [~andrew.wang], would you double check this?

> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Ding Fei
>Assignee: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567337#comment-15567337
 ] 

Kai Sasaki commented on HADOOP-13061:
-

[~drankye] Sure, I can. Thanks.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567332#comment-15567332
 ] 

Kai Zheng commented on HADOOP-13061:


Hi [~lewuathe], could you help proceed with the follow-up updates? Please let 
me know if there is anything I can help with. Thanks!

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567307#comment-15567307
 ] 

SammiChen commented on HADOOP-11798:


I have followed "stop progress" and "submit patch". Hope we can have the 
Jenkins building soon. 

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.
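
For context, the XOR code itself is simple; below is a self-contained sketch, 
independent of both the Hadoop raw coder API and the native implementation in 
the patch, assuming equal-length blocks:

{code:java}
// Self-contained illustration of the XOR code: parity is the XOR of all
// data blocks, and a single erased block is recovered as the XOR of the
// parity with the surviving blocks.
public class XorCodeDemo {
  static byte[] encode(byte[][] data) {
    byte[] parity = new byte[data[0].length];
    for (byte[] block : data) {
      for (int i = 0; i < parity.length; i++) {
        parity[i] ^= block[i];
      }
    }
    return parity;
  }

  static byte[] recoverErased(byte[][] survivors, byte[] parity) {
    byte[] lost = parity.clone();
    for (byte[] block : survivors) {
      for (int i = 0; i < lost.length; i++) {
        lost[i] ^= block[i];
      }
    }
    return lost;
  }
}
{code}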



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-11798:
---
Status: Patch Available  (was: Open)

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11798 stopped by SammiChen.
--
> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567292#comment-15567292
 ] 

Hadoop QA commented on HADOOP-13061:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
52s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 52s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} root: The patch generated 0 new + 194 unchanged - 24 
fixed = 194 total (was 218) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832805/HADOOP-13061.16.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ba8e75716bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b84c489 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10736/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10736/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10736/artifact/patchprocess/patch-compile-root.txt
 |
| mvnsite | 

[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567276#comment-15567276
 ] 

Kai Zheng commented on HADOOP-11798:


Thank you [~Sammi] for checking and working on that.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567259#comment-15567259
 ] 

Kai Zheng commented on HADOOP-11798:


Thank you Andrew! I will bookmark the HADOOP Build link.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567257#comment-15567257
 ] 

Kai Zheng commented on HADOOP-13061:


Hi [~andrew.wang],

Thanks for your thoughts! Yes, the legacy coder comes from the HDFS-RAID 
implementation, and we have had it since the very beginning of our HDFS-EC 
development. Whether we should keep and maintain the coder is a good question. 
I have discussed this with [~zhz] a few times in the past, and my preference 
would be to keep the coder and make the codec work if it doesn't involve too 
much overhead, for reasons like these:
1) Having the coder certainly doesn't mean we can migrate HDFS-RAID file 
system data directly, but migration is possible with some quickly written 
tools built on the coder. The coder logic isn't coupled to HDFS specifics 
(either HDFS-RAID blocks or HDFS-EC stripes); all it does is encode/decode a 
group of input buffers (and thus a group of blocks, if called repeatedly).
2) It is useful for performance comparison. AFAIK HDFS-RAID comes up fairly 
often when HDFS erasure coding is discussed.
3) It is a good sample to illustrate that even for the most often mentioned RS 
algorithm, it is worthwhile to have different implementations and codecs.
4) If we don't want to use it on the HDFS side, that's fine, because all the 
coder/codec logic lives on the Hadoop common side. It is worth considering 
that the Hadoop erasure coder/codec framework can evolve independently and be 
used elsewhere.

When I said we should implement a new erasure codec for rs-legacy, that 
doesn't mean a lot of work, since we already have the underlying raw coder 
implementations. It is meant to be consistent with what we did for the xor, 
rs-default and hhxor codecs. The codec doesn't have to be used by HDFS; HDFS 
can simply ignore it.

Sound good? Thanks.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-10-11 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567232#comment-15567232
 ] 

John Zhuge commented on HADOOP-12090:
-

Thanks [~sjlee0] for the clarification. Increasing the socket buffer size to 
64K does fix my issues!

BTW, what is the purpose of the change to {{hadoop-minikdc/pom.xml}} in patch 
002?

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The error that causes this failure on the minikdc's KDC server is a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at 
> org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at 
> org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at 
> org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567193#comment-15567193
 ] 

Kai Sasaki commented on HADOOP-13061:
-

[~drankye]
Ah, sorry, I attached the wrong JIRA ticket. I intended to paste HADOOP-13665. 
But anyway, given your explanation, it seems unrelated. Thanks.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567181#comment-15567181
 ] 

Andrew Wang commented on HADOOP-11798:
--

Hi Kai, I manually triggered the build just now; you should have access too as 
a committer: https://builds.apache.org/job/PreCommit-HADOOP-Build/

I think in this case though, the issue is that the JIRA is in "In Progress" 
state rather than "Patch Available". [~Sammi] could you "stop progress" and 
then "submit patch"? Only the JIRA assignee can "stop progress".

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567172#comment-15567172
 ] 

Andrew Wang commented on HADOOP-13061:
--

Somewhat unrelated question about rs-legacy: it's from the Facebook HDFS-RAID 
implementation, right? My recollection is that we added rs-legacy to support 
migration from HDFS-RAID to the HDFS-7285 implementation without copying.

However, as it stands, that zero-copy migration is not possible. HDFS-RAID is 
not striped; it works at the block level. Also, HDFS-RAID combines an entire 
directory of files, whereas HDFS-7285 operates on a single file. So I don't 
think we can natively handle HDFS-RAID files without a lot more work.

[~drankye], if this is accurate, do you think it's still worth keeping the 
rs-legacy codec around?

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567152#comment-15567152
 ] 

Kai Zheng commented on HADOOP-13061:


bq. Does it mean the issue which will be solved in HADOOP-13685?
Nope. rs-legacy is a different erasure codec, to be implemented using the 
corresponding RS legacy raw coder. We can file a new issue to address this, 
considering this refactoring is already quite large.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13061:
---
Attachment: HADOOP-13061.16.patch

Updated the patch to add the missing changes.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch, HADOOP-13061.16.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567129#comment-15567129
 ] 

Kai Zheng commented on HADOOP-11798:


The Jenkins build doesn't trigger, probably because of the archived fix 
version. I can't get rid of it. [~andrew.wang] do you have any idea? Thanks.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11798:
---
Fix Version/s: (was: 3.0.0-alpha2)

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567126#comment-15567126
 ] 

Ding Fei commented on HADOOP-13708:
---

Do I need to do anything more for this JIRA and the pull request 
[https://github.com/apache/hadoop/pull/140] on GitHub? I'm not familiar with 
the workflow in this community! Thanks!

> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Ding Fei
>Assignee: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11798:
---
Fix Version/s: 3.0.0-alpha2

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is warranted for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-10-11 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13565:
---
Description: 
In KerberosAuthenticationHandler#authenticate, we use canonicalized server name 
derived from HTTP request to build server SPN and authenticate client. This can 
be problematic if the HTTP client/server are running from a non-local Kerberos 
realm that the local realm has trust with (e.g., NN UI).

For example, 
The server is running its HTTP endpoint using SPN from the client realm:
hadoop.http.authentication.kerberos.principal
HTTP/_HOST/TEST.COM

When client sends request to namenode at http://NN1.example.com:50070 from 
client.test@test.com.

The client talks to KDC first and gets a service ticket 
HTTP/NN1.example.com/TEST.COM to authenticate with the server via SPNEGO 
negotiation. 

The authentication will end up with either no valid credential error or 
checksum failure depending on the HTTP client naming resolution or HTTP Host 
field from the request header provided by the browser. 

The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
will always return an SPN with the local realm (HTTP/nn.example@example.com) no 
matter whether the server login SPN is from that realm or not. 

The proposed fix is to use the default server login principal (by passing null 
as the 1st parameter to gssManager.createCredential()) instead. This way we 
avoid dependencies on HTTP client behavior (the Host header or name resolution 
like CNAME) and assumptions about the local realm. 


  was:
In KerberosAuthenticationHandler#authenticate, we use canonicalized server name 
derived from HTTP request to build server SPN and authenticate client. This can 
be problematic if the HTTP client/server are running from a non-local Kerberos 
realm that the local realm has trust with (e.g., NN UI).

For example, 
The server is running its HTTP endpoint using SPN from the client realm:
hadoop.http.authentication.kerberos.principal
HTTP/_HOST/TEST.COM

When client sends request to namenode at http://NN1.example.com:50070 from 
client.test@test.com.

The client talks to KDC first and gets a service ticket 
HTTP/NN1.example.com/TEST.COM to authenticate with the server via SPNEGO 
negotiation. 

The authentication will end up with either no valid credential error or 
checksum failure depending on the HTTP client naming resolution or HTTP Host 
field from the request header provided by the browser. 

The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
will always return an SPN with the local realm (HTTP/nn.example@example.com) no 
matter whether the server login SPN is from that realm or not. 

The proposed fix is to use the default server login principal (by passing null 
as the 1st parameter to gssManager.createCredential()) instead. This way we 
avoid dependencies on HTTP client behavior (the Host header or name resolution 
like CNAME) and assumptions about the local realm. 



> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use canonicalized server 
> name derived from HTTP request to build server SPN and authenticate client. 
> This can be problematic if the HTTP client/server are running from a 
> non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, 
> The server is running its HTTP endpoint using SPN from the client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST/TEST.COM
> When client sends request to namenode at http://NN1.example.com:50070 from 
> client.test@test.com.
> The client talks to KDC first and gets a service ticket 
> HTTP/NN1.example.com/TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either no valid credential error or 
> checksum failure depending on the HTTP client naming resolution or HTTP Host 
> field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", 
> serverName)}} will always return an SPN with the local realm 
> (HTTP/nn.example@example.com) no matter whether the server login SPN is from 
> that realm or not. 
> The proposed fix is to use the default server login principal (by passing 
> null as the 1st parameter to gssManager.createCredential()) instead. This way 
> we avoid dependencies on HTTP client behavior (the Host header or name 
> resolution like CNAME) and assumptions about the local realm. 
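
A minimal JGSS sketch of the proposed approach (an assumed shape, not the 
attached patch): passing null as the name makes the acceptor credential come 
from the server's login principal instead of an SPN rebuilt from the request.
{code}
import org.ietf.jgss.*;

public class AcceptorCredentialSketch {
  static GSSCredential serverCredential() throws GSSException {
    GSSManager gssManager = GSSManager.getInstance();
    Oid spnegoOid = new Oid("1.3.6.1.5.5.2"); // SPNEGO mechanism OID
    // null name => default server login principal, so no dependency on the
    // HTTP Host header, CNAME resolution, or the local realm.
    return gssManager.createCredential(null,
        GSSCredential.INDEFINITE_LIFETIME, spnegoOid,
        GSSCredential.ACCEPT_ONLY);
  }
}
{code}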



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567090#comment-15567090
 ] 

Hudson commented on HADOOP-13698:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10593 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10593/])
HADOOP-13698. Document caveat for KeyShell when underlying KeyProvider (xiao: 
rev b84c4891f9eca8d56593e48e9df88be42e24220d)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md


> Document caveat for KeyShell when underlying KeyProvider does not delete a key
> --
>
> Key: HADOOP-13698
> URL: https://issues.apache.org/jira/browse/HADOOP-13698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13698.01.patch
>
>
> For cases like:
> {noformat}
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [DuplicateKeyException], message [Key with name "d" already exists in 
> "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
> enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key delete d
> You are about to DELETE all versions of  key d from KeyProvider 
> KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
> Deleting key: d from KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> d has not been deleted. java.io.IOException: Key named d was already deleted 
> but is disabled. Use purge to destroy all traces or undelete to reactivate.
> java.io.IOException: Key named d was already deleted but is disabled. Use 
> purge to destroy all traces or undelete to reactivate.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.deleteKey(KMSClientProvider.java:877)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$DeleteCommand.execute(KeyShell.java:436)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> 

[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-10-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567076#comment-15567076
 ] 

Sangjin Lee commented on HADOOP-12090:
--

Just to be clear, this issue is caused because Mina (the networking stack on 
which ApacheDS depends) sets the send and receive buffer sizes to 1 KB (see 
DIRSERVER-2074 for more detail). If we move away from that behavior, for 
example by using different libraries, the problem might go away.

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The error that causes this failure on the KDC server of the minikdc is a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at 
> org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at 
> org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at 
> org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-11 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13698:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8. Thanks Andrew for the review!

> Document caveat for KeyShell when underlying KeyProvider does not delete a key
> --
>
> Key: HADOOP-13698
> URL: https://issues.apache.org/jira/browse/HADOOP-13698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13698.01.patch
>
>
> For cases like:
> {noformat}
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [DuplicateKeyException], message [Key with name "d" already exists in 
> "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
> enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key delete d
> You are about to DELETE all versions of  key d from KeyProvider 
> KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
> Deleting key: d from KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> d has not been deleted. java.io.IOException: Key named d was already deleted 
> but is disabled. Use purge to destroy all traces or undelete to reactivate.
> java.io.IOException: Key named d was already deleted but is disabled. Use 
> purge to destroy all traces or undelete to reactivate.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.deleteKey(KMSClientProvider.java:877)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$DeleteCommand.execute(KeyShell.java:436)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> 

[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-10-11 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567020#comment-15567020
 ] 

Zhe Zhang commented on HADOOP-13558:


Thanks much Xiao!

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13558.01.patch, HADOOP-13558.02.patch, 
> HADOOP-13558.branch-2.7.patch
>
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions, 
> and if they are met, it invokes {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI; if there is one, 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It treats the UGI instance as if it had a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file; having {{isKeytab}} set to TRUE while 
> {{keytabFile}} is set to NULL is what triggers the {{IOException}} in the 
> method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.
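
A sketch of the guard this implies (an assumption about the shape of the fix, 
not the committed patch): only attempt a keytab relogin when the UGI itself 
owns the keytab.
{code}
class UgiReloginGuardSketch {
  private boolean isKeytab;   // the Subject carries Kerberos keytab keys
  private String keytabFile;  // non-null only if UGI logged in from a keytab

  // A caller-supplied Subject is the caller's responsibility to renew,
  // so relogin only when this UGI actually owns a keytab file.
  boolean shouldReloginFromKeytab() {
    return isKeytab && keytabFile != null;
  }
}
{code}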



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.

2016-10-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567012#comment-15567012
 ] 

Chris Nauroth commented on HADOOP-13452:


[~fabbri], thank you for this patch.  I have just one minor nitpick.

{code}
return (fStr.indexOf(aStr) == 0);
{code}

I think this line would be more readable using 
[{{String#startsWith(String)}}|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#startsWith(java.lang.String)].
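
For reference, the suggested change is just:
{code}
// equivalent prefix checks; startsWith states the intent directly
boolean byIndexOf    = (fStr.indexOf(aStr) == 0);
boolean byStartsWith = fStr.startsWith(aStr);
{code}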

After that change, I think I'll be ready to commit the patch to the feature 
branch.

> S3Guard: Implement access policy for intra-client consistency with in-memory 
> metadata store.
> 
>
> Key: HADOOP-13452
> URL: https://issues.apache.org/jira/browse/HADOOP-13452
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13452-HADOOP-13345.002.patch, 
> HADOOP-13452.001.patch
>
>
> Implement an S3A access policy based on an in-memory metadata store.  This 
> can provide consistency within the same client without needing to integrate 
> with an external system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13694) Data transfer encryption with AES 192: Invalid key length.

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566969#comment-15566969
 ] 

Hadoop QA commented on HADOOP-13694:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  8m  3s{color} | 
{color:red} root generated 4 new + 8 unchanged - 0 fixed = 12 total (was 8) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 11 unchanged - 11 fixed = 13 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13694 |
| GITHUB PR | https://github.com/apache/hadoop/pull/135 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 5661b529c4f3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dacd3ec |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| cc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10735/artifact/patchprocess/diff-compile-cc-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10735/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10735/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10735/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Data transfer encryption with AES 192: Invalid key length.
> --

[jira] [Commented] (HADOOP-13698) Document caveat for KeyShell when underlying KeyProvider does not delete a key

2016-10-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566840#comment-15566840
 ] 

Andrew Wang commented on HADOOP-13698:
--

+1 thanks Xiao

> Document caveat for KeyShell when underlying KeyProvider does not delete a key
> --
>
> Key: HADOOP-13698
> URL: https://issues.apache.org/jira/browse/HADOOP-13698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13698.01.patch
>
>
> For cases like:
> {noformat}
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [DuplicateKeyException], message [Key with name "d" already exists in 
> "KeyProvider@5e552a98. Key exists but has been disabled. Use undelete to 
> enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key delete d
> You are about to DELETE all versions of  key d from KeyProvider 
> KMSClientProvider[http://localhost:16000/kms/v1/]. Continue?  (Y or N) Y
> Deleting key: d from KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> d has not been deleted. java.io.IOException: Key named d was already deleted 
> but is disabled. Use purge to destroy all traces or undelete to reactivate.
> java.io.IOException: Key named d was already deleted but is disabled. Use 
> purge to destroy all traces or undelete to reactivate.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.deleteKey(KMSClientProvider.java:877)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$DeleteCommand.execute(KeyShell.java:436)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:538)
> $ hadoop key create d
> d has not been created. java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
> java.io.IOException: HTTP status [500], exception 
> [KeyProvider$DuplicateKeyException], message [Key with name "d" already 
> exists in "KeyProvider@5e552a98. Key exists but has been disabled. Use 
> undelete to enable.] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:615)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:573)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:739)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:747)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:506)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:91)

[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566829#comment-15566829
 ] 

Eric Badger commented on HADOOP-13709:
--

bq. TestShell fails because it does not have the YARN component of the fix from 
YARN-5641. I've manually tested that it passes with that fix.
I was incorrect about this. This patch is completely independent of (though 
required by) YARN-5641. I did manually test that 
TestShell#testShellCommandTimerLeak passes, but the failure may be related to 
this patch. 

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need 
> to allow the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.
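
A self-contained sketch of the intended behavior (an assumed approach, not the 
attached patch): destroy the child on interrupt or JVM shutdown instead of 
orphaning it.
{code}
import java.io.IOException;

public class ShellCleanupSketch {
  static int runAndReap() throws IOException {
    // A shutdown hook covers JVM exit; the catch block covers interruption
    // while blocked in waitFor(), so the child is never orphaned.
    Process process = new ProcessBuilder("sleep", "60").start();
    Thread hook = new Thread(process::destroy);
    Runtime.getRuntime().addShutdownHook(hook);
    try {
      return process.waitFor();
    } catch (InterruptedException ie) {
      process.destroy();                  // kill the child, don't orphan it
      Thread.currentThread().interrupt(); // preserve the interrupt status
      return -1;
    } finally {
      try {
        Runtime.getRuntime().removeShutdownHook(hook);
      } catch (IllegalStateException jvmAlreadyExiting) {
        // shutdown in progress; the hook itself is reaping the child
      }
    }
  }
}
{code}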



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566733#comment-15566733
 ] 

Eric Badger edited comment on HADOOP-13709 at 10/11/16 10:16 PM:
-

[~andrew.wang], the code's been this way since MapReduce was put into separate 
projects back in 2009. I put down the affects version as 2.2, but it goes back 
further than that.


was (Author: ebadger):
[~andrew.wang], the code's been this way since MapReduce was put into separate 
projects back in 2009. So I put down the affects version as 0.22, but it goes 
back further than that.

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need 
> to allow the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Affects Version/s: (was: 0.22.0)
   2.2.0

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need 
> to allow the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566764#comment-15566764
 ] 

Hadoop QA commented on HADOOP-13708:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13708 |
| GITHUB PR | https://github.com/apache/hadoop/pull/140 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f14bfc6ae2c6 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a09bf7 |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-archives 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10734/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Ding Fei
>Assignee: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Affects Version/s: 0.22.0
 Target Version/s: 2.7.3

[~andrew.wang], the code's been this way since MapReduce was put into separate 
projects back in 2009. So I put down the affects version as 0.22, but it goes 
back further than that.

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need 
> to allow the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13708:
-
Assignee: Ding Fei

> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Ding Fei
>Assignee: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13708:
-
Affects Version/s: 2.8.0
 Target Version/s: 3.0.0-alpha1, 2.8.0
   Status: Patch Available  (was: Open)

Thanks for the patch. I added you as a contributor to the Hadoop project and 
hit Submit Patch so the precommit bot runs.

> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Ding Fei
>Assignee: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566713#comment-15566713
 ] 

Andrew Wang commented on HADOOP-13709:
--

Hi Eric, do you mind setting affects and target versions for this JIRA?

Overall the change looks good, though since this seems related to YARN I'll let 
someone else +1.

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need 
> to allow the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned rather than killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures

2016-10-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566701#comment-15566701
 ] 

Andrew Wang commented on HADOOP-13700:
--

Thanks for reviewing Eddy. I'll wait until tomorrow to commit since Allen or 
Steve might also want to look.

> Remove unthrown IOException from TrashPolicy#initialize and #getInstance 
> signatures
> ---
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance(), has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0
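
For context, a sketch of the two shapes described above (parameter types 
simplified to keep this self-contained; see TrashPolicy.java itself for the 
authoritative declarations):
{code}
import java.io.IOException;

abstract class TrashPolicySketch {
  // 2.x shape: the caller supplies the trash directory Path (3rd parameter)
  static TrashPolicySketch getInstance(Object conf, Object fs, Object home) {
    return null; // placeholder
  }
  // 3.0 shape: Path parameter removed and IOException added to the signature
  static TrashPolicySketch getInstance(Object conf, Object fs) throws IOException {
    return null; // placeholder
  }
}
{code}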



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-10-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566601#comment-15566601
 ] 

Chris Nauroth commented on HADOOP-13687:


Steve, thank you for your review and +1.  I will hold off committing until end 
of week in case the other participants want further discussion.

> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566598#comment-15566598
 ] 

Hudson commented on HADOOP-13705:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10589/])
HADOOP-13705. Revert HADOOP-13534 Remove unused TrashPolicy#getInstance (wang: 
rev 8a09bf7c19d9d2f6d6853d45e11b0d38c7c67f2a)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java


> Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13705
> URL: https://issues.apache.org/jira/browse/HADOOP-13705
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13705.001.patch
>
>
> Per discussion on HADOOP-13700, I'd like to revert HADOOP-13534. It removes a 
> deprecated API, but the 2.x line does not have a release with the new 
> replacement API. This places a burden on downstream applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initialize code

2016-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566597#comment-15566597
 ] 

Hudson commented on HADOOP-13534:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10589/])
HADOOP-13705. Revert HADOOP-13534 Remove unused TrashPolicy#getInstance (wang: 
rev 8a09bf7c19d9d2f6d6853d45e11b0d38c7c67f2a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java


> Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: the {{getInstance}} and {{initialize}} APIs that 
> take a Path are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures

2016-10-11 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566576#comment-15566576
 ] 

Lei (Eddy) Xu commented on HADOOP-13700:


Thanks [~andrew.wang]. It looks safe to me. +1.

> Remove unthrown IOException from TrashPolicy#initialize and #getInstance 
> signatures
> ---
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance(), has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13705:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Thanks Akira and Haibo for reviewing; I've pushed this to trunk.

> Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13705
> URL: https://issues.apache.org/jira/browse/HADOOP-13705
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13705.001.patch
>
>
> Per discussion on HADOOP-13700, I'd like to revert HADOOP-13534. It removes a 
> deprecated API, but the 2.x line does not have a release with the new 
> replacement API. This places a burden on downstream applications.






[jira] [Commented] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566540#comment-15566540
 ] 

Hudson commented on HADOOP-13684:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10588 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10588/])
HADOOP-13684. Snappy may complain Hadoop is built without snappy if (weichiu: 
rev 4b32b1420d98ea23460d05ae94f2698109b3d6f7)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java


> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> Key: HADOOP-13684
> URL: https://issues.apache.org/jira/browse/HADOOP-13684
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13684.001.patch, HADOOP-13684.002.patch
>
>
> If for some reason libhadoop cannot be found/loaded, Snappy complains that 
> Hadoop is not built with Snappy even though it actually is.
> {code:title=SnappyCodec.java}
> public static void checkNativeCodeLoaded() {
>   if (!NativeCodeLoader.isNativeCodeLoaded() ||
>       !NativeCodeLoader.buildSupportsSnappy()) {
>     throw new RuntimeException("native snappy library not available: " +
>         "this version of libhadoop was built without " +
>         "snappy support.");
>   }
> }
> {code}
> This case may happen with MAPREDUCE-6577.
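
One possible shape of the fix, sketched here only to show the intent of 
separating the two failure cases; the committed patch may structure the check 
and word the messages differently:

{code:title=SnappyCodec.java (sketch)}
public static void checkNativeCodeLoaded() {
  if (!NativeCodeLoader.isNativeCodeLoaded()) {
    // libhadoop itself is missing: say so, instead of blaming snappy support.
    throw new RuntimeException("native hadoop library not available: " +
        "libhadoop could not be loaded.");
  }
  if (!NativeCodeLoader.buildSupportsSnappy()) {
    throw new RuntimeException("native snappy library not available: " +
        "this version of libhadoop was built without snappy support.");
  }
}
{code}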






[jira] [Commented] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-11 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566539#comment-15566539
 ] 

Haibo Chen commented on HADOOP-13705:
-

Thanks [~andrew.wang] for the patch. This should take care of the issues I 
encountered.

> Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13705
> URL: https://issues.apache.org/jira/browse/HADOOP-13705
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13705.001.patch
>
>
> Per discussion on HADOOP-13700, I'd like to revert HADOOP-13534. It removes a 
> deprecated API, but the 2.x line does not have a release with the new 
> replacement API. This places a burden on downstream applications.






[jira] [Updated] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13684:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed v2 patch to trunk, branch-2 and branch-2.8. Thanks [~xiaochen] for 
reviewing the patch!

> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> Key: HADOOP-13684
> URL: https://issues.apache.org/jira/browse/HADOOP-13684
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13684.001.patch, HADOOP-13684.002.patch
>
>
> If for some reason libhadoop cannot be found/loaded, Snappy complains that 
> Hadoop is not built with Snappy even though it actually is.
> {code:title=SnappyCodec.java}
> public static void checkNativeCodeLoaded() {
>   if (!NativeCodeLoader.isNativeCodeLoaded() ||
>       !NativeCodeLoader.buildSupportsSnappy()) {
>     throw new RuntimeException("native snappy library not available: " +
>         "this version of libhadoop was built without " +
>         "snappy support.");
>   }
> }
> {code}
> This case may happen with MAPREDUCE-6577.






[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-10-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566473#comment-15566473
 ] 

Xiaoyu Yao commented on HADOOP-13565:
-

Thanks [~arpitagarwal] for the review. In case other folks on the watcher list 
have additional comments, I will hold off the commit until 10/13.

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized server 
> name derived from the HTTP request to build the server SPN and authenticate 
> the client. This can be problematic if the HTTP client/server are running from 
> a non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, the server is running its HTTP endpoint using an SPN from the 
> client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When the client sends a request to the namenode at http://NN1.example.com:50070 
> from client.test@test.com, the client talks to the KDC first and gets a 
> service ticket HTTP/NN1.example.com@TEST.COM to authenticate with the server 
> via SPNEGO negotiation. 
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client's name resolution or the HTTP 
> Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
> will always return an SPN in the local realm (HTTP/nn.example.com@EXAMPLE.COM), 
> no matter whether the server login SPN is from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the 1st parameter to gssManager.createCredential()). This way 
> we avoid depending on HTTP client behavior (Host header or name resolution 
> like CNAME) or assumptions about the local realm. 
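
For illustration, a minimal sketch of that {{createCredential}} call using the 
standard JGSS API; variable names and the surrounding context are assumptions, 
not the actual patch:

{code:title=KerberosAuthenticationHandler.java (sketch)}
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.Oid;

GSSManager gssManager = GSSManager.getInstance();
Oid spnegoOid = new Oid("1.3.6.1.5.5.2");  // SPNEGO mechanism OID
// Passing null as the name makes JGSS use the server's own login principal,
// rather than an SPN rebuilt from the client-supplied host name.
GSSCredential serverCredential = gssManager.createCredential(
    null, GSSCredential.INDEFINITE_LIFETIME, spnegoOid, GSSCredential.ACCEPT_ONLY);
{code}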






[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566438#comment-15566438
 ] 

Eric Badger commented on HADOOP-13709:
--

TestShell fails because this patch does not include the YARN portion of the fix 
from YARN-5641. I've manually verified that it passes once that fix is applied. 

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down, because it is blocked 
> on I/O waiting for the return value of the subprocess that it spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.
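
A minimal sketch of the requested behavior, assuming local {{builder}} and 
{{exitCode}} variables inside {{runCommand}}; this is not the actual patch:

{code:title=Shell.java (sketch)}
Process process = builder.start();
try {
  exitCode = process.waitFor();
} catch (InterruptedException ie) {
  process.destroy();                   // kill the child instead of orphaning it
  Thread.currentThread().interrupt();  // preserve the interrupt for callers
  throw new IOException(ie.toString());
}
{code}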






[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-10-11 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566436#comment-15566436
 ] 

Arpit Agarwal commented on HADOOP-13565:


+1

Thanks for tracking this down and the fix [~xyao].

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13565.00.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized server 
> name derived from the HTTP request to build the server SPN and authenticate 
> the client. This can be problematic if the HTTP client/server are running from 
> a non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, the server is running its HTTP endpoint using an SPN from the 
> client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When the client sends a request to the namenode at http://NN1.example.com:50070 
> from client.test@test.com, the client talks to the KDC first and gets a 
> service ticket HTTP/NN1.example.com@TEST.COM to authenticate with the server 
> via SPNEGO negotiation. 
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client's name resolution or the HTTP 
> Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
> will always return an SPN in the local realm (HTTP/nn.example.com@EXAMPLE.COM), 
> no matter whether the server login SPN is from that realm or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the 1st parameter to gssManager.createCredential()). This way 
> we avoid depending on HTTP client behavior (Host header or name resolution 
> like CNAME) or assumptions about the local realm. 






[jira] [Resolved] (HADOOP-13711) Suppress CachingGetSpaceUsed from logging interrupted exception stacktrace

2016-10-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-13711.
--
Resolution: Duplicate

Created in error; a duplicate of HADOOP-13710.

> Suppress CachingGetSpaceUsed from logging interrupted exception stacktrace
> -
>
> Key: HADOOP-13711
> URL: https://issues.apache.org/jira/browse/HADOOP-13711
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
>
> The CachingGetSpaceUsed thread is typically interrupted when the node is 
> shut down. Since this is a routine operation, there is little value in 
> printing the stack trace of an {{InterruptedException}}.
> {quote}
> 2016-10-11 10:02:25,894 WARN  fs.CachingGetSpaceUsed 
> (CachingGetSpaceUsed.java:run(180)) - Thread Interrupted waiting to refresh 
> disk information
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:176)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}






[jira] [Created] (HADOOP-13710) Suppress CachingGetSpaceUsed from logging interrupted exception stacktrace

2016-10-11 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-13710:


 Summary: Suppress CachingGetSpaceUsed from logging interrupted 
exception stacktrace
 Key: HADOOP-13710
 URL: https://issues.apache.org/jira/browse/HADOOP-13710
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.8.0
Reporter: Wei-Chiu Chuang
Priority: Minor


The CachingGetSpaceUsed thread is typically interrupted when the node is shut 
down. Since this is a routine operation, there is little value in printing the 
stack trace of an {{InterruptedException}}.
{quote}
2016-10-11 10:02:25,894 WARN  fs.CachingGetSpaceUsed 
(CachingGetSpaceUsed.java:run(180)) - Thread Interrupted waiting to refresh 
disk information
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:176)
at java.lang.Thread.run(Thread.java:745)
{quote}
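
A minimal sketch of the suggested change in the refresh loop; the 
{{refreshInterval}} field name and the surrounding structure are assumptions, 
not the committed patch:

{code:title=CachingGetSpaceUsed.java (sketch)}
try {
  Thread.sleep(refreshInterval);
} catch (InterruptedException e) {
  // Routine during shutdown: log the message only, without the stack trace.
  LOG.warn("Thread interrupted waiting to refresh disk information: "
      + e.getMessage());
  Thread.currentThread().interrupt();
}
{code}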






[jira] [Created] (HADOOP-13711) Suppress CachingGetSpaceUsed from logging interrupted exception stacktrace

2016-10-11 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-13711:


 Summary: Suppress CachingGetSpaceUsed from logging interrupted 
exception stacktrace
 Key: HADOOP-13711
 URL: https://issues.apache.org/jira/browse/HADOOP-13711
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.8.0
Reporter: Wei-Chiu Chuang
Priority: Minor


The CachingGetSpaceUsed thread is typically interrupted when the node is shut 
down. Since this is a routine operation, there is little value in printing the 
stack trace of an {{InterruptedException}}.
{quote}
2016-10-11 10:02:25,894 WARN  fs.CachingGetSpaceUsed 
(CachingGetSpaceUsed.java:run(180)) - Thread Interrupted waiting to refresh 
disk information
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:176)
at java.lang.Thread.run(Thread.java:745)
{quote}






[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566348#comment-15566348
 ] 

Hadoop QA commented on HADOOP-13709:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 55 unchanged - 0 fixed = 57 total (was 55) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.util.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832727/HADOOP-13709.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 492759b14e18 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2fb392a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10733/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10733/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10733/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10733/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> 

[jira] [Commented] (HADOOP-13684) Snappy may complain Hadoop is built without snappy if libhadoop is not found.

2016-10-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566327#comment-15566327
 ] 

Wei-Chiu Chuang commented on HADOOP-13684:
--

Committing this based on [~xiaochen]'s +1.

> Snappy may complain Hadoop is built without snappy if libhadoop is not found.
> -
>
> Key: HADOOP-13684
> URL: https://issues.apache.org/jira/browse/HADOOP-13684
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13684.001.patch, HADOOP-13684.002.patch
>
>
> If for some reason libhadoop cannot be found/loaded, Snappy complains that 
> Hadoop is not built with Snappy even though it actually is.
> {code:title=SnappyCodec.java}
> public static void checkNativeCodeLoaded() {
>   if (!NativeCodeLoader.isNativeCodeLoaded() ||
>       !NativeCodeLoader.buildSupportsSnappy()) {
>     throw new RuntimeException("native snappy library not available: " +
>         "this version of libhadoop was built without " +
>         "snappy support.");
>   }
> }
> {code}
> This case may happen with MAPREDUCE-6577.






[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566320#comment-15566320
 ] 

Chris Nauroth commented on HADOOP-13560:


[~ste...@apache.org], I reviewed HADOOP-13560-branch-2-011.patch attached to 
HADOOP-13703.  This version looks good to me.  My only remaining request is to 
remove an unused import in {{ITestS3AHugeFilesByteBufferBlocks}}, and that's 
trivial enough to fix on commit.  We also need another patch file for trunk.  
After that, I'll be ready to sign off.

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename
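
A rough sketch of the verification described above, using only generic 
FileSystem calls; paths are illustrative and the metadata check is noted rather 
than implemented:

{code:title=ITestS3AHugeFiles (sketch)}
// Assumes "fs" is an initialized S3A FileSystem and the source file was
// written by the test setup.
Path src = new Path("/tests/hugefile");
Path dest = new Path("/tests/hugefile-renamed");
long len = fs.getFileStatus(src).getLen();
assertTrue("rename returned false", fs.rename(src, dest));
FileStatus copied = fs.getFileStatus(dest);
assertEquals("length mismatch after copy", len, copied.getLen());
// Verifying that user metadata survives the multipart copy needs a
// store-level HEAD on the destination key; the generic FileSystem API
// does not expose object metadata.
{code}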






[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566317#comment-15566317
 ] 

Hadoop QA commented on HADOOP-10075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 75 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-common-project/hadoop-kms in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m  
5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 22s{color} | {color:orange} root: The patch generated 42 new + 2541 
unchanged - 43 fixed = 2583 total (was 2584) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 582 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
17s{color} | {color:red} The patch has 4277 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
32s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-maven-plugins generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-auth-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} 

[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566232#comment-15566232
 ] 

Hudson commented on HADOOP-13697:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10587 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10587/])
HADOOP-13697. LogLevel#main should not throw exception if no arguments. 
(liuml07: rev 2fb392a587d288b628936ca6d18fabad04afc585)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java


> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13697.000.patch, HADOOP-13697.001.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel   [-protocol (http|https)]
>   [-setlevel[-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method and log an error 
> message instead of throwing the stack trace, which may frustrate users.
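
A sketch of what that could look like; the {{CLI}} construction details are 
assumptions based on the stack trace above, not the committed patch:

{code:title=LogLevel.java (sketch)}
public static void main(String[] args) throws Exception {
  CLI cli = new CLI(new Configuration());
  try {
    System.exit(cli.run(args));
  } catch (HadoopIllegalArgumentException e) {
    // Print the message (usage has already been shown) instead of a stack trace.
    System.err.println(e.getLocalizedMessage());
    System.exit(-1);
  }
}
{code}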






[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Attachment: HADOOP-13709.001.patch

Attaching patch with the hadoop-common portion of the fix. YARN-5641 includes 
the YARN portion of the fix. 

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down, because it is blocked 
> on I/O waiting for the return value of the subprocess that it spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.






[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Status: Patch Available  (was: Open)

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down, because it is blocked 
> on I/O waiting for the return value of the subprocess that it spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.






[jira] [Commented] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566194#comment-15566194
 ] 

Xiaoyu Yao commented on HADOOP-13686:
-

Thanks [~cheersyang] for the update. Patch v3 looks good to me. Two remaining 
issues:
1. We need to leave the change to TestHDFSTrash.java in HDFS-10922.
2. Address the Jenkins failures in testTrashPermission.

bq. 
I don't think HDFS-10922 will use AuditableTrashPolicy/AuditableCheckpoints, 
they are helper classes to verify trash intervals in testTrashRestarts, I can't 
see how to reuse it in HDFS trash tests. 

You are right. I proposed reusing AuditableTrashPolicy/AuditableCheckpoints 
because patch v06 in HDFS-10922 had duplicated code at the time I reviewed 
this one. Now that you've updated HDFS-10922, we don't need to address 
#4 anymore.

bq. Regarding #5, I used a static AuditableCheckpoints and static vars because 
I need to share checkpoint state between multiple instances of trash policies 
while simulating restarts; I used an atomic integer to avoid thread-safety 
problems.

Makes sense to me.

> Adding additional unit test for Trash (I)
> -
>
> Key: HADOOP-13686
> URL: https://issues.apache.org/jira/browse/HADOOP-13686
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HADOOP-13686.01.patch, HADOOP-13686.02.patch, 
> HADOOP-13686.03.patch
>
>
> This ticket is opened to track adding the following unit tests in 
> hadoop-common:
> # test users can delete their own trash directory
> # test users can delete an empty directory and the directory is moved to trash
> # test fs.trash.interval with invalid values such as 0 or negative
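
For the third case, a minimal JUnit sketch; the expectation that trash reports 
itself disabled for a zero interval is an assumption based on existing 
semantics, and the test method name is illustrative:

{code:title=TestTrash.java (sketch)}
@Test
public void testInvalidTrashInterval() throws Exception {
  Configuration conf = new Configuration();
  conf.setLong("fs.trash.interval", 0);  // also worth trying negative values
  Trash trash = new Trash(FileSystem.getLocal(conf), conf);
  assertFalse("trash should be disabled for interval 0", trash.isEnabled());
}
{code}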






[jira] [Updated] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13697:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} and {{branch-2}}. Thanks for your review, [~jojochuang].

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13697.000.patch, HADOOP-13697.001.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel   [-protocol (http|https)]
>   [-setlevel[-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method and log an error 
> message instead of throwing the stack trace, which may frustrate users.






[jira] [Created] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13709:


 Summary: Clean up subprocesses spawned by Shell.java:runCommand 
when the shell process exits
 Key: HADOOP-13709
 URL: https://issues.apache.org/jira/browse/HADOOP-13709
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


The runCommand code in Shell.java can get into a situation where it will ignore 
InterruptedExceptions and refuse to shut down, because it is blocked on I/O 
waiting for the return value of the subprocess that it spawned. We need to allow 
the subprocess to be interrupted and killed when the shell process gets killed. 
Currently the JVM will shut down and all of the subprocesses will be orphaned 
rather than killed.






[jira] [Commented] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566170#comment-15566170
 ] 

Hadoop QA commented on HADOOP-13686:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} root: The patch generated 0 new + 78 unchanged - 1 
fixed = 78 total (was 79) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
22s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13686 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832685/HADOOP-13686.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7dbfce05169c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ecb51b8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10732/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10732/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10732/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566145#comment-15566145
 ] 

Chris Nauroth commented on HADOOP-13502:


The warnings flagged by pre-commit for the trunk patch are not relevant.

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.
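
As a usage note, contract tests gate on such flags roughly as in the following 
sketch; the lookup pattern shown is an illustration, not the test-suite 
internals, and any renamed/split flags from this issue would slot in the same 
way:

{code:title=contract test guard (sketch)}
// Skip semantics-sensitive tests when the store declares blob-store behavior.
if (getContract().getConf().getBoolean("fs.contract.is-blobstore", false)) {
  ContractTestUtils.skip("Skipping: blob store semantics differ from HDFS");
}
{code}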






[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566119#comment-15566119
 ] 

ASF GitHub Bot commented on HADOOP-13708:
-

GitHub user danix800 opened a pull request:

https://github.com/apache/hadoop/pull/140

HADOOP-13708. Fix a few typos in site *.md documents

Fix several typos in site *.md documents.  

Touched documents listed:
* hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
* hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
* hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/danix800/hadoop HADOOP-13708

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/140.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #140


commit 0b4e7237c3e943bea414199286333ce458541ee6
Author: Ding Fei 
Date:   2016-10-11T18:03:06Z

HADOOP-13708. Fix a few typos in site *.md documents




> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md






[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Description: 
Fix several typos in site *.md documents. 

Touched documents listed:
* hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



  was:
Fix several typos in site *.md documents. 

Touched documents listed:
*hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md




> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md






[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Description: 
Fix several typos in site *.md documents. 

Touched documents listed:
* hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
* hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
* hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



  was:
Fix several typos in site *.md documents. 

Touched documents listed:
* hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md




> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> * 
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md






[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Description: 
Fix several typos in site *.md documents. 

Touched documents listed:
  *hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



  was:
Fix several typos in site *.md documents. 

Touched documents listed:
  hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
  
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md




> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
>   *hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
>   *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Description: 
Fix several typos in site *.md documents. 

Touched documents listed:
*hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



  was:
Fix several typos in site *.md documents. 

Touched documents listed:
  *hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
*hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md




> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> *hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
>   
> *hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
>   *hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Description: 
Fix several typos in site *.md documents. 

Touched documents listed:
  hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
  
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
  
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
  hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
  
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
  hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
  hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



  was:
Fix several typos in site *.md documents. 

Touched documents listed:
hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md




> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
>   hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
>   hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
>   
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
>   
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
>   hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
>   
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
>   hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
>   
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
>   hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
>   hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Attachment: HADOOP-13708.patch

> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
> Attachments: HADOOP-13708.patch
>
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Component/s: documentation

> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ding Fei
>Priority: Minor
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures

2016-10-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566072#comment-15566072
 ] 

Andrew Wang commented on HADOOP-13700:
--

[~haibochen] thanks for reviewing; see HADOOP-13705 for the Path parameter, 
which is a simple revert.

> Remove unthrown IOException from TrashPolicy#initialize and #getInstance 
> signatures
> ---
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0
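
To make the incompatibility above concrete, a minimal caller sketch follows; 
the signatures are paraphrased from this discussion rather than copied from 
the Hadoop source tree, so treat the exact shapes as assumptions.

{code:java}
// Hypothetical 2.x-era call site; names mirror the discussion above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.TrashPolicy;

public class TrashPolicyCaller {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Compiles against 2.x, where getInstance takes a Path and declares
    // no IOException. Against 3.0.0-alpha1 this line breaks twice over:
    // the Path parameter is gone and a new IOException must be handled.
    TrashPolicy policy =
        TrashPolicy.getInstance(conf, fs, new Path("/user/alice"));
    System.out.println(policy.getClass().getName());
  }
}
{code}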



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures

2016-10-11 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566059#comment-15566059
 ] 

Haibo Chen commented on HADOOP-13700:
-

Thanks [~andrew.wang] for the fix. This solves most of the pain of using the 
new API. What about the path parameter that was in the parameter list before 
3.0?

> Remove unthrown IOException from TrashPolicy#initialize and #getInstance 
> signatures
> ---
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Fei updated HADOOP-13708:
--
Description: 
Fix several typos in site *.md documents. 

Touched documents listed:
hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



  was:Fix several typos in site *.md documents.


> Fix a few typos in site *.md documents
> --
>
> Key: HADOOP-13708
> URL: https://issues.apache.org/jira/browse/HADOOP-13708
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ding Fei
>Priority: Minor
>
> Fix several typos in site *.md documents. 
> Touched documents listed:
> hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
> hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md
> hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
> hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
> hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13708) Fix a few typos in site *.md documents

2016-10-11 Thread Ding Fei (JIRA)
Ding Fei created HADOOP-13708:
-

 Summary: Fix a few typos in site *.md documents
 Key: HADOOP-13708
 URL: https://issues.apache.org/jira/browse/HADOOP-13708
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ding Fei
Priority: Minor


Fix several typos in site *.md documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency

2016-10-11 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-11487:
---

Assignee: John Zhuge

> FileNotFound on distcp to s3n/s3a due to creation inconsistency 
> 
>
> Key: HADOOP-11487
> URL: https://issues.apache.org/jira/browse/HADOOP-11487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Affects Versions: 2.7.2
>Reporter: Paulo Motta
>Assignee: John Zhuge
>
> I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm 
> getting the following exception:
> {code:java}
> 2015-01-16 20:53:18,187 ERROR [main] 
> org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
> hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz
> java.io.FileNotFoundException: No such file or directory 
> 's3n://s3-bucket/file.gz'
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.io.FileNotFoundException: No such file or 
> directory 's3n://s3-bucket/file.gz'
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> {code}
> However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there, 
> so the job failure is probably due to Amazon's S3 eventual consistency.
> In my opinion, to fix this problem NativeS3FileSystem.getFileStatus must 
> honor the fs.s3.maxRetries property to avoid failures like this (a retry 
> sketch follows below).
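
A minimal sketch of that retry idea, assuming a fs.s3.maxRetries-style 
setting; this is not the actual NativeS3FileSystem code, and the helper names 
are invented for illustration:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

public class EventualConsistencyRetry {
  /** Stand-in for a getFileStatus-style call that may hit stale listings. */
  interface StatusCall<T> {
    T call() throws IOException;
  }

  static <T> T withRetries(StatusCall<T> call, int maxRetries, long sleepMillis)
      throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return call.call();
      } catch (FileNotFoundException e) {
        // Under eventual consistency a freshly written key may not be
        // visible yet; back off and ask again instead of failing the task.
        last = e;
        try {
          Thread.sleep(sleepMillis);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("interrupted while retrying", ie);
        }
      }
    }
    if (last == null) {
      throw new IllegalArgumentException("maxRetries must be >= 0");
    }
    throw last;
  }
}
{code}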



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-11 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13686:
-
Attachment: HADOOP-13686.03.patch

> Adding additional unit test for Trash (I)
> -
>
> Key: HADOOP-13686
> URL: https://issues.apache.org/jira/browse/HADOOP-13686
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HADOOP-13686.01.patch, HADOOP-13686.02.patch, 
> HADOOP-13686.03.patch
>
>
> This ticket is opened to track adding the following unit tests in 
> hadoop-common (a sketch of the third case follows below):
> # test users can delete their own trash directory
> # test users can delete an empty directory and the directory is moved to trash
> # test fs.trash.interval with invalid values such as 0 or negative
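
A rough sketch of the third case, assuming the usual 
org.apache.hadoop.fs.Trash API; the test name and the expected behaviour 
(trash simply disabled for a non-positive interval) are assumptions to be 
confirmed against the patch:

{code:java}
import static org.junit.Assert.assertFalse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Trash;
import org.junit.Test;

public class TestTrashIntervalSketch {
  @Test
  public void trashDisabledForZeroInterval() throws Exception {
    Configuration conf = new Configuration();
    // An invalid (zero or negative) interval should leave trash disabled
    // rather than cause checkpointing misbehaviour.
    conf.setLong("fs.trash.interval", 0);
    FileSystem fs = FileSystem.getLocal(conf);
    assertFalse(new Trash(fs, conf).isEnabled());
  }
}
{code}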



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565730#comment-15565730
 ] 

Yuanbo Liu edited comment on HADOOP-13707 at 10/11/16 3:27 PM:
---

[~aw] Thanks for your response.
Non-admin users shouldn't be looking at it in a secure environment. But if HTTP 
SPNEGO is not enabled, that is to say, the HTTP server runs in a non-secure 
environment, users cannot be authenticated and passed to the NameNode, so 
"/logs" should be accessible to all users.

{quote}
It's probably also worth pointing out that these logs are typically huge...
{quote}
Agree with you. I think the biggest feature of "/logs" is to provide URLs to 
download logs. Browsing logs online shouldn't be encouraged.


was (Author: yuanbo):
[~aw] Thanks for your response.
Non-admin users shouldn't be looking at it in a secure environment. But if HTTP 
SPNEGO is not enabled, that is to say, the HTTP server runs in a non-secure 
environment, users cannot be authenticated and passed to the NameNode, so 
"/logs" should be accessible to all users.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{hadoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{hadoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565730#comment-15565730
 ] 

Yuanbo Liu commented on HADOOP-13707:
-

[~aw] Thanks for your response.
Non-admin users shouldn't be looking at it in a secure environment. But if HTTP 
SPNEGO is not enabled, that is to say, the HTTP server runs in a non-secure 
environment, users cannot be authenticated and passed to the NameNode, so 
"/logs" should be accessible to all users.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{hadoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{hadoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565710#comment-15565710
 ] 

Wei-Chiu Chuang commented on HADOOP-13697:
--

Thanks for submitting the new patch. The v1 patch looks good to me.

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch, HADOOP-13697.001.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:httpPort> <classname> [-protocol (http|https)]
>   [-setlevel <host:httpPort> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method and log an error 
> message instead of throwing the stack trace, which may frustrate users (see 
> the sketch below).
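
One way to realize that suggestion, sketched with plain JDK types; the method 
names are stand-ins, not the actual LogLevel code:

{code:java}
public class LogLevelMainSketch {
  public static void main(String[] args) {
    try {
      run(args); // stand-in for LogLevel$CLI.run(...)
    } catch (IllegalArgumentException e) {
      // Print a short error message instead of a stack trace.
      System.err.println(e.getMessage());
      System.exit(-1);
    }
  }

  static void run(String[] args) {
    if (args.length == 0) {
      throw new IllegalArgumentException("No arguments specified");
    }
    // ... parse -getlevel / -setlevel and perform the request ...
  }
}
{code}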



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565604#comment-15565604
 ] 

Allen Wittenauer edited comment on HADOOP-13707 at 10/11/16 2:41 PM:
-

/logs was specifically blocked way back when due to the sensitive nature of the 
content. Non-admin users shouldn't be looking at it at all and admin users have 
access from the shell.

It's probably also worth pointing out that these logs are typically huge and 
viewing them in a browser is a pretty terrible experience.


was (Author: aw):
/logs was specifically blocked way back when due to the sensitive nature of the 
content. Non-admin users shouldn't be looking at it at all and admin users have 
access from the shell.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{hadoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{hadoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565604#comment-15565604
 ] 

Allen Wittenauer commented on HADOOP-13707:
---

/logs was specifically blocked way back when due to the sensitive nature of the 
content. Non-admin users shouldn't be looking at it at all and admin users have 
access from the shell.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{hadoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{hadoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565350#comment-15565350
 ] 

Kai Sasaki commented on HADOOP-13061:
-

[~drankye] Thank you so much for checking! The build seems to have failed due 
to missing configuration keys in {{CodecUtil}}. Could you check it?

And one question.
{code}
 //TODO:rs-legacy should be handled differently.
{code}
Does it refer to the issue that will be solved in HADOOP-13685?

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, 
> HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, 
> HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, 
> HADOOP-13061.15.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-11798:
---
Attachment: HADOOP-11798-v2.patch

1. Patch rebase
2. Add test cases for native XOR codec

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch
>
>
> The raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is desirable for performance gains.
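
For readers new to the codec, the reason XOR suffices for a single erasure: 
the parity block is the XOR of all data blocks, so the missing block is the 
XOR of the parity with every surviving block. A pure-Java sketch (the JIRA 
itself is about a native implementation of this):

{code:java}
public class XorRecoverySketch {
  /**
   * blocks[i] == null marks the single erased data block; parity holds the
   * XOR of all data blocks. Returns the reconstructed erased block.
   */
  static byte[] recover(byte[][] blocks, byte[] parity) {
    byte[] out = parity.clone();
    for (byte[] b : blocks) {
      if (b == null) {
        continue; // the erased block contributes nothing we can read
      }
      for (int i = 0; i < out.length; i++) {
        out[i] ^= b[i];
      }
    }
    return out; // parity XOR all survivors == the erased block
  }
}
{code}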



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13397) Add dockerfile for Hadoop

2016-10-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565125#comment-15565125
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-13397 at 10/11/16 10:43 AM:


> The OS image contains minimally glibc and various other GPL components.

In my understanding, glibc and the GPL components are distributed via another 
tarball (the base image). If we create a Docker image for Hadoop, it only 
includes the diff (layer.tar). The base image containing the GPL files (e.g. 
the debian:jessie image) does not seem to be included.

I played with golang:1.6-onbuild image to confirm with this script: 
https://raw.githubusercontent.com/docker/docker/master/contrib/download-frozen-image-v2.sh
 :
{quote}
$ ./download-frozen-image-v2.sh /tmp/b golang:1.6-onbuild 
Downloading 'library/golang:1.6-onbuild@1.6-onbuild' (20 layers)...
\\ 100.0%  (progress output repeated for each of the 20 layers)

Download of images into '/tmp/b' complete.

$ cd /tmp/b/870716423ca9b2b721debfc28c19e54b50245839fef2f364d38d3452f77515ed
$ tar xvf layer.tar 
x usr/
x usr/local/
x usr/local/bin/
x usr/local/bin/go-wrapper
{quote}

Please point it out if I am missing anything.



was (Author: ozawa):
> The OS image contains minimally glibc and various other GPL components.

In my understanding, glibc and GPL components are distributed via another tar 
ball(base image). If we create docker image for Hadoop, base image including 
GPL files seems to be not included.
I played with golang:1.6-onbuild image to confirm with this script: 
https://raw.githubusercontent.com/docker/docker/master/contrib/download-frozen-image-v2.sh
 :
{quote}
$ ./download-frozen-image-v2.sh /tmp/b golang:1.6-onbuild 
Downloading 'library/golang:1.6-onbuild@1.6-onbuild' (20 layers)...
\\ 100.0%  (progress output repeated for each layer)

[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-10-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565125#comment-15565125
 ] 

Tsuyoshi Ozawa commented on HADOOP-13397:
-

> The OS image contains minimally glibc and various other GPL components.

In my understanding, glibc and GPL components are distributed via another tar 
ball(base image). If we create docker image for Hadoop, base image including 
GPL files seems to be not included.
I played with golang:1.6-onbuild image to confirm with this script: 
https://raw.githubusercontent.com/docker/docker/master/contrib/download-frozen-image-v2.sh
 :
{quote}
$ ./download-frozen-image-v2.sh /tmp/b golang:1.6-onbuild 
Downloading 'library/golang:1.6-onbuild@1.6-onbuild' (20 layers)...
\\ 100.0%  (progress output repeated for each of the 20 layers)

Download of images into '/tmp/b' complete.

$ cd /tmp/b/870716423ca9b2b721debfc28c19e54b50245839fef2f364d38d3452f77515ed
$ tar xvf layer.tar 
x usr/
x usr/local/
x usr/local/bin/
x usr/local/bin/go-wrapper
{quote}

If we push a Docker image, it only includes the diff: this is what I mentioned.



> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2.  From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here's some requirement:
> 1. Separated docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring 
> manually
> 3. Start Hadoop process as no-daemon
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-11 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565100#comment-15565100
 ] 

SammiChen commented on HADOOP-11798:


Hi Andrew and Kai, I created JIRA HDFS-10994 to export the XOR EC policy in 
the "hdfs erasurecode" command, and I will be working on it. Thanks!




> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch
>
>
> The raw XOR coder is utilized in the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common 
> case. It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it is desirable for performance gains.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Description: 
In {{HttpServer2#hasAdministratorAccess}}, it uses 
`hadoop.security.authorization` to detect whether HTTP is authenticated.
It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
such as "/logs", and it will return error message as below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{hadoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled, if the value of  {{hadoop.http.authentication.type}}  equals `simple`, 
anybody has administrator access.

  was:
In {{HttpServer2#hasAdministratorAccess}}, it uses 
`hadoop.security.authorization` to detect whether HTTP is authenticated.
It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
such as "/logs", and it will return error message as below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{adoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled, if the value of  {{hadoop.http.authentication.type}}  equals `simple`, 
anybody has administrator access.


> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{hadoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{hadoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.
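
A minimal sketch of the check described above, assuming the stock 
Configuration API; the key name comes from the description, while the 
surrounding class and method are illustrative only:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class HttpAuthCheckSketch {
  static boolean httpAuthEnabled(Configuration conf) {
    // "simple" (the default) means no real HTTP authentication, so every
    // caller would effectively have administrator access.
    return !"simple".equalsIgnoreCase(
        conf.get("hadoop.http.authentication.type", "simple"));
  }
}
{code}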



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Description: 
In {{HttpServer2#hasAdministratorAccess}}, it uses 
`hadoop.security.authorization` to detect whether HTTP is authenticated.
It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
such as "/logs", and it will return error message as below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{adoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled, if the value of  {{hadoop.http.authentication.type}}  equals `simple`, 
anybody has administrator access.

  was:
In {{HttpServer2#hasAdministratorAccess}}, it uses 
`hadoop.security.authorization` to detect whether HTTP is authenticated.
It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
such as "/logs", and it will return error message as below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{adoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled, if the value of  {{adoop.http.authentication.type}}  equals `simple`, 
anybody has administrator access.


> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{adoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{hadoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Labels: security  (was: )

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{adoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{adoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565076#comment-15565076
 ] 

Hadoop QA commented on HADOOP-13061:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
57s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 57s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} root: The patch generated 0 new + 194 unchanged - 24 
fixed = 194 total (was 218) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832628/HADOOP-13061.15.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8c9888ce93bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b1266 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 

[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Attachment: HADOOP-13707.001.patch

I've prepared a patch without any test cases. I'd like to get the community's 
thoughts before I complete it.
Any response will be appreciated, thanks in advance.

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13707.001.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
> such as "/logs", and it will return error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{adoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled, if the value of  {{adoop.http.authentication.type}}  equals 
> `simple`, anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HADOOP-13707:
---

 Summary: If kerberos is enabled while HTTP SPNEGO is not 
configured, some links cannot be accessed
 Key: HADOOP-13707
 URL: https://issues.apache.org/jira/browse/HADOOP-13707
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yuanbo Liu


In {{HttpServer2#hasAdministratorAccess}}, it uses 
`hadoop.security.authorization` to detect whether HTTP is authenticated.
It's not correct, because enabling Kerberos and HTTP SPNEGO are two steps. If 
Kerberos is enabled while HTTP SPNEGO is not, some links cannot be accessed, 
such as "/logs", and it will return error message as below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote]

We should use {{adoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled, if the value of  {{adoop.http.authentication.type}}  equals `simple`, 
anybody has administrator access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-10-11 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13707:

Description: 
In {{HttpServer2#hasAdministratorAccess}}, the code uses 
{{hadoop.security.authorization}} to detect whether HTTP requests are 
authenticated.
This is not correct, because enabling Kerberos and enabling HTTP SPNEGO are 
two separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some 
links cannot be accessed, such as "/logs", and the server returns an error 
message like the one below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{hadoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled. If the value of {{hadoop.http.authentication.type}} is `simple`, 
everyone has administrator access.

  was:
In {{HttpServer2#hasAdministratorAccess}}, the code uses 
{{hadoop.security.authorization}} to detect whether HTTP requests are 
authenticated.
This is not correct, because enabling Kerberos and enabling HTTP SPNEGO are 
two separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some 
links cannot be accessed, such as "/logs", and the server returns an error 
message like the one below:
{quote}
HTTP ERROR 403
Problem accessing /logs/. Reason:
User dr.who is unauthorized to access this page.
{quote}

We should use {{hadoop.http.authentication.type}} instead of 
{{hadoop.security.authorization}} to detect whether HTTP authentication is 
enabled. If the value of {{hadoop.http.authentication.type}} is `simple`, 
everyone has administrator access.


> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>
> In {{HttpServer2#hasAdministratorAccess}}, the code uses 
> {{hadoop.security.authorization}} to detect whether HTTP requests are 
> authenticated.
> This is not correct, because enabling Kerberos and enabling HTTP SPNEGO are 
> two separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some 
> links cannot be accessed, such as "/logs", and the server returns an error 
> message like the one below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should use {{hadoop.http.authentication.type}} instead of 
> {{hadoop.security.authorization}} to detect whether HTTP authentication is 
> enabled. If the value of {{hadoop.http.authentication.type}} is `simple`, 
> everyone has administrator access.
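For anyone who wants to reproduce the 403 described above, a small probe 
against a daemon's web UI is enough; the sketch below uses only JDK classes, 
and the host/port are placeholders:

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class LogsProbe {
  public static void main(String[] args) throws Exception {
    // Placeholder address: point this at a web UI on a cluster where
    // Kerberos is enabled but HTTP SPNEGO is not configured.
    URL url = new URL("http://namenode.example.com:50070/logs/");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Without SPNEGO the filter falls back to the anonymous pseudo user
    // ("dr.who"), and the administrator check rejects the request.
    System.out.println(conn.getResponseCode()); // expected: 403
    conn.disconnect();
  }
}
{code}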



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565039#comment-15565039
 ] 

Hadoop QA commented on HADOOP-13686:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 75 unchanged - 1 fixed = 75 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13686 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832627/HADOOP-13686.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 25f6325b0db7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b1266 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10730/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10730/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10730/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Adding additional unit test for Trash (I)
> -
>
> Key: HADOOP-13686
> URL: https://issues.apache.org/jira/browse/HADOOP-13686
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>
