[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190574#comment-15190574
 ] 

Hudson commented on HADOOP-12672:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9453 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9453/])
HADOOP-12672. RPC timeout should not override IPC ping interval (iwasakims: rev 
682adc6ba9db3bed94fd4ea3d83761db6abfe695)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch
>
>
> Currently, if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides ipc.ping.interval and the client throws an exception 
> instead of sending a ping when the interval has passed. The RPC timeout should 
> work without effectively disabling the IPC ping.
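For reference, a minimal sketch of how a client would set the two values involved (the keys are the ones from core-default.xml; the numbers and the use of Configuration#setInt are illustrative only):

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Overall timeout for an RPC call; before this fix, any positive value
// effectively disabled the ping behavior.
conf.setInt("ipc.client.rpc-timeout.ms", 120000);
// How often the client pings the server while waiting for a response;
// with this fix, the interval stays effective alongside the timeout.
conf.setInt("ipc.ping.interval", 60000);
{code}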



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12908) Make JvmPauseMonitor a singleton

2016-03-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190568#comment-15190568
 ] 

Hadoop QA commented on HADOOP-12908:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} root: patch generated 0 new + 553 unchanged - 4 
fixed = 553 total (was 557) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 4s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 12s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s 
{color} | {color:green} hadoop-hdfs-nfs in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 14s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed with 
JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 44s 

[jira] [Updated] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-10 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12672:
--
   Resolution: Fixed
Fix Version/s: 2.6.5
   2.7.3
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2.6 and above. Thanks. 

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch
>
>
> Currently, if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides ipc.ping.interval and the client throws an exception 
> instead of sending a ping when the interval has passed. The RPC timeout should 
> work without effectively disabling the IPC ping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190451#comment-15190451
 ] 

Hadoop QA commented on HADOOP-12672:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 12s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 25s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792608/HADOOP-12672.004.patch
 |
| JIRA Issue | HADOOP-12672 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux cd598eb45e8a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-10 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-12916:
---

 Summary: Allow different Hadoop IPC Call Queue throttling policies 
with FCQ/BackOff
 Key: HADOOP-12916
 URL: https://issues.apache.org/jira/browse/HADOOP-12916
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Currently, the back-off policy from HADOOP-10597 is hard-coded to be based on 
whether the call queue is full. This ticket is opened to allow flexible back-off 
policies, such as one based on the moving average of response times across RPC 
calls of different priorities. 
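A hypothetical sketch of what a pluggable policy could look like; every name here is illustrative, not the API actually proposed in this ticket:

{code}
/** Decides whether the server should ask an incoming call to back off. */
public interface RpcBackoffPolicy {
  boolean shouldBackoff(int priorityLevel);
}

/** The current hard-coded behavior: back off only when the queue is full. */
public class QueueFullBackoffPolicy implements RpcBackoffPolicy {
  private final java.util.concurrent.BlockingQueue<?> callQueue;

  public QueueFullBackoffPolicy(java.util.concurrent.BlockingQueue<?> queue) {
    this.callQueue = queue;
  }

  @Override
  public boolean shouldBackoff(int priorityLevel) {
    return callQueue.remainingCapacity() == 0;
  }
}
{code}

A response-time-based policy would implement the same interface but compare a per-priority moving average against a threshold.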





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2016-03-10 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-11404:
-
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and trunk.

> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-11404.001.patch, HADOOP-11404.002.patch, 
> HADOOP-11404.003.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with the message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user authenticated 
> successfully but was denied by a configured ACL; it is not a Kerberos issue. We 
> should improve the log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.
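One possible shape of the clarified check (the wording is illustrative; the attached patches may phrase it differently):

{code}
if (clientPrincipal == null) {
  // Authentication already succeeded; the denial comes from the ACL.
  throw new AuthorizationException("User " + user
      + " is not authorized for protocol " + protocol
      + ": denied by configured ACL");
}
{code}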



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190230#comment-15190230
 ] 

Arpit Agarwal commented on HADOOP-12672:


+1 for the v4 patch. 

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch
>
>
> Currently, if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides ipc.ping.interval and the client throws an exception 
> instead of sending a ping when the interval has passed. The RPC timeout should 
> work without effectively disabling the IPC ping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-10 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12672:
--
Attachment: HADOOP-12672.004.patch

Thanks, [~arpitagarwal]. The patch needed a trivial rebase; attaching the rebased patch.

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch
>
>
> Currently, if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides ipc.ping.interval and the client throws an exception 
> instead of sending a ping when the interval has passed. The RPC timeout should 
> work without effectively disabling the IPC ping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189621#comment-15189621
 ] 

Chris Nauroth edited comment on HADOOP-12910 at 3/10/16 11:48 PM:
--

I am sensing massive scope creep in this discussion.

bq. Actually, one more thing to define in HDFS-9924 and include any 
specification is: linearizability/serializability guarantees

I'm going to repeat some of my comments from HDFS-9924.  A big motivation for 
this effort is that we often see an application needs to execute a large set of 
renames, where the application has knowledge that there is no dependency 
between the rename operations and no ordering requirements.  Although 
linearizability is certainly nicer to have than not have, use cases like this 
don't need linearizability.

Implementing a linearizability guarantee would significantly complicate this 
effort.  ZooKeeper has an async API with ordering guarantees, and it takes a 
very delicate coordination between client-side and server-side state to make 
that happen.  Instead, I suggest that we focus on what we really need (async 
execution of independent operations) and tell clients that they have 
responsibility to coordinate dependencies between calls.  I also have commented 
on HDFS-9924 that we could later provide a programming model of futures + 
promises as a more elegant way to help callers structure code with multiple 
dependent async calls.  Even that much is not an immediate need though.

This does not preclude providing a linearizability guarantee at some point in 
the future.  I'm just saying that we have an opportunity to provide something 
valuable sooner even without linearizability.

bq. I'm going to be ruthless and say "I'd like to see a specification of this 
alongside the existing one". Because that one has succeeded in being a 
reference point for everyone; we need to continue that for a key binding. It 
should be straightforward here.

Assuming the above project plan is acceptable (no linearizability right now), 
this reduces to a simple statement like "individual async operations adhere to 
the same contract as the corresponding sync operations, and there are no 
guarantees on ordering across multiple async operations."

bq. Is it the future that raises an IOE, or the operation? I can see both 
needing to

Certainly Hadoop-specific exceptions like {{AccessControlException}} and 
{{QuotaExceededException}} must dispatch asynchronously, such as wrapped in an 
{{ExecutionException}}.  You won't know if you're going to hit one of these at 
the time of submitting the call.  My opinion is that if the API is truly async, 
then it implies we cannot perform I/O on the calling thread, and therefore 
cannot throw an {{IOException}} at call time.  I believe Nicholas wants to put 
{{throws IOException}} in the method signatures anyway for ease of 
backwards-compatible changes in the future though, just in case we find a need 
later.  I think that's acceptable.
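A small sketch of that dispatch model, assuming the async API hands back a plain java.util.concurrent.Future (names are illustrative):

{code}
try {
  future.get(); // the I/O happens off the calling thread
} catch (ExecutionException e) {
  Throwable cause = e.getCause();
  if (cause instanceof AccessControlException) {
    // Hadoop-specific failures surface here, not at submit time.
  }
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
}
{code}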



was (Author: cnauroth):
I am sensing massive scope creep in this discussion.

bq. Actually, one more thing to define in HDFS-9924 and include any 
specification is: linearizability/serializability guarantees

I'm going to repeat some of my comments from HDFS-9924.  A big motivation for 
this effort is that we often see an application needs to execute a large set of 
renames, where the application has knowledge that there is no dependency 
between the rename operations and no ordering requirements.  Although 
linearizability is certainly nicer to have than not have, use cases like this 
don't need linearizability.

Implementing a linearizability guarantee would significantly complicate this 
effort.  ZooKeeper has an async API with ordering guarantees, and it takes a 
very delicate coordination between client-side and server-side state to make 
that happen.  Instead, I suggest that we focus on what we really need (async 
execution of independent operations) and tell clients that they have 
responsibility to coordinate dependencies between calls.  I also have commented 
on HDFS-9924 that we could later providing a programming model of futures + 
promises as a more elegant way to help callers structure code with multiple 
dependent async calls.  Even that much is not an immediate need though.

This does not preclude providing a linearizability guarantee at some point in 
the future.  I'm just saying that we have an opportunity to provide something 
valuable sooner even without linearizability.

bq. I'm going to be ruthless and say "I'd like to see a specification of this 
alongside the existing one". Because that one has succeeded in being a 
reference point for everyone; we need to continue that for a key binding. It 
should be straightforward here.

Assuming the above project plan is acceptable (no linearizability right now), 
this reduces to a simple statement like "individual 

[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-10 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190172#comment-15190172
 ] 

Siddharth Seth commented on HADOOP-12909:
-

There are potential problems with supporting client-side calls without fixing 
the server side, the main one being that all handler threads on the server can 
end up blocked. Of course, the same would happen if the client app were to 
create its own threads and make remote calls (FileSystem, for instance).
The future-based approach mentioned here and in other related jiras ends up 
simplifying client code; however, frameworks need to be aware of the potential 
effect on the server.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out but does not 
> wait for the response from the server.
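The synchronous path the description refers to is the classic monitor pattern on the call object; roughly (a sketch of the idea, not the exact ipc.Client internals):

{code}
class Call {
  private boolean done;
  private Object value;

  synchronized Object waitForResponse() throws InterruptedException {
    while (!done) {
      wait(); // the caller thread blocks here in synchronous mode
    }
    return value;
  }

  synchronized void setResponse(Object v) {
    value = v;
    done = true;
    notifyAll(); // the receiver thread wakes the waiting caller
  }
}
{code}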



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12915) shelldocs and releasedocmaker build steps do not work correctly on Windows.

2016-03-10 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12915:
--

 Summary: shelldocs and releasedocmaker build steps do not work 
correctly on Windows.
 Key: HADOOP-12915
 URL: https://issues.apache.org/jira/browse/HADOOP-12915
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Chris Nauroth


In the pom.xml files, the calls to shelldocs and releasedocmaker use the 
exec-maven-plugin to invoke the scripts directly.  On *nix, this works fine in 
cooperation with the shebang lines in the scripts, which tell the OS what 
interpreter to use.  The shebang lines don't work on Windows though.  Instead, 
exec-maven-plugin needs to specify bash as the executable and pass the script 
as the first argument.  Beyond that, there seem to be further 
as-yet-undiagnosed compatibility issues within the scripts.
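The suggested shape of the fix, sketched against the exec-maven-plugin configuration schema (the script path is a placeholder, not the committed change):

{code}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <configuration>
    <!-- Invoke bash explicitly so Windows does not depend on the shebang line. -->
    <executable>bash</executable>
    <arguments>
      <argument>${shelldocs.script.path}</argument>
    </arguments>
  </configuration>
</plugin>
{code}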



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Support asynchronous RPC calls

2016-03-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190132#comment-15190132
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12909:
--

Thanks, [~vinodkv] and [~sseth].  HADOOP-11552 looks like a useful server-side 
improvement.

> Support asynchronous RPC calls
> --
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out but does not 
> wait for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12909:
-
Summary: Change ipc.Client to support asynchronous calls  (was: Support 
asynchronous RPC calls)

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out but does not 
> wait for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.

2016-03-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12899:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk.  I filed HADOOP-12915 for follow-up on 
shelldocs and releasedocmaker.  Thank you, Andrew and Allen.

> External distribution stitching scripts do not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12899.001.patch, HADOOP-12899.002.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching and dist-tar-stitching 
> scripts out of hadoop-dist/pom.xml and into external files.  It appears this 
> change is not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190117#comment-15190117
 ] 

Arpit Agarwal commented on HADOOP-12672:


Sorry, I missed your updated patch, [~iwasakims]. +1, LGTM.

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch
>
>
> Currently, if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides ipc.ping.interval and the client throws an exception 
> instead of sending a ping when the interval has passed. The RPC timeout should 
> work without effectively disabling the IPC ping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190081#comment-15190081
 ] 

James Clampffer commented on HADOOP-12910:
--

bq. Can we also have a close method that uses reference counting on this FS 
object? Lack of reference counting in FileSystem#close is a major usability 
problem.

I strongly agree with this.  Based on my work on HDFS-8707 I think formalizing 
object lifetime dependencies in a specification early on will make things 
significantly easier to implement and maintain.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.
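A hedged usage sketch under the proposed API (FutureFileSystem and its signature are the proposal above, not an existing class):

{code}
static boolean renameAsync(FutureFileSystem ffs, Path src, Path dst)
    throws Exception {
  Future<Boolean> pending = ffs.rename(src, dst);
  // The caller may submit other independent operations before blocking.
  return pending.get(); // I/O failures arrive wrapped in ExecutionException
}
{code}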



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12909) Support asynchronous RPC calls

2016-03-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12909:
-
Description: 
In ipc.Client, the underlying mechanism already supports asynchronous 
calls -- the calls share a connection, the call requests are sent using a 
thread pool and the responses can be out of order.  Indeed, a synchronous call is 
implemented by invoking wait() in the caller thread in order to wait for the 
server response.

In this JIRA, we change ipc.Client to support asynchronous mode.  In 
asynchronous mode, it returns once the request has been sent out but does not 
wait for the response from the server.

  was:
In ipc.Client, the underlying mechanism already supports asynchronous 
calls -- the calls share a connection, the call requests are sent using a 
thread pool and the responses can be out of order.  Indeed, synchronized call 
is implemented by invoking wait() in the caller thread in order to wait for the 
server response.

In this JIRA, we change ipc.Client to support asynchronous mode.  In 
asynchronous mode, it returns once the request has been sent out but does not 
wait for the response from the server.


> Support asynchronous RPC calls
> --
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out but does not 
> wait for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189973#comment-15189973
 ] 

John Zhuge commented on HADOOP-12855:
-

[~steve_l], please compare with the patch for HADOOP-12908: 
https://issues.apache.org/jira/secure/attachment/12792590/HADOOP-12908-002.patch.
 Thanks.

> Add option to disable JVMPauseMonitor across services
> -
>
> Key: HADOOP-12855
> URL: https://issues.apache.org/jira/browse/HADOOP-12855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, test
>Affects Versions: 2.8.0
> Environment: JVMs with miniHDFS and miniYarn clusters
>Reporter: Steve Loughran
>Assignee: John Zhuge
> Attachments: HADOOP-12855-001.patch, HADOOP-12855-002.patch, 
> HADOOP-12855-003.patch, HADOOP-12855-004.patch
>
>
> Now that the YARN and HDFS services automatically start a JVM pause monitor, 
> if you start up the mini HDFS and YARN clusters, with history server, you are 
> spinning off 5+ threads, all looking for JVM pauses and all printing things out 
> when one happens.
> We do not need these monitors in minicluster testing; they merely add load 
> and noise to tests.
> Rather than retrofit new options everywhere, how about having a 
> "jvm.pause.monitor.enabled" flag (default true) which, when true, starts the 
> monitor thread.
> That way, the existing code is unchanged, and there is always a JVM pause 
> monitor for the various services; it just isn't spinning up threads.
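From the test side, the proposal would reduce to something like this (the flag name is taken from the description; the snippet is illustrative):

{code}
Configuration conf = new Configuration();
// Silence pause monitors in minicluster tests; the default would remain true.
conf.setBoolean("jvm.pause.monitor.enabled", false);
{code}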



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12908) Make JvmPauseMonitor a singleton

2016-03-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12908:

Status: Patch Available  (was: Open)

> Make JvmPauseMonitor a singleton
> 
>
> Key: HADOOP-12908
> URL: https://issues.apache.org/jira/browse/HADOOP-12908
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12908-001.patch, HADOOP-12908-002.patch
>
>
> Make JvmPauseMonitor a singleton, just like JvmMetrics, because there is no 
> use case for running multiple instances per JVM. {{TestMetrics$setPauseMonitor}} 
> is no longer needed, and initialization code can be simplified.
> For example, this code segment
> {noformat}
> pauseMonitor = new JvmPauseMonitor();
> addService(pauseMonitor);
> jm.setPauseMonitor(pauseMonitor);
> {noformat}
> becomes
> {noformat}
> addService(JvmPauseMonitor.INSTANCE);
> {noformat}
> And this code segment
> {noformat}
>   pauseMonitor = new JvmPauseMonitor();
>   pauseMonitor.init(config);
>   pauseMonitor.start();
>   metrics.getJvmMetrics().setPauseMonitor(pauseMonitor);
> {noformat}
> becomes
> {noformat}
>   JvmPauseMonitor.INSTANCE.init(config);
>   JvmPauseMonitor.INSTANCE.start();
> {noformat}
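One way the singleton could be exposed, sketched as an assumption rather than the actual patch:

{code}
public class JvmPauseMonitor {
  public static final JvmPauseMonitor INSTANCE = new JvmPauseMonitor();

  private JvmPauseMonitor() {
    // A private constructor enforces a single instance per JVM.
  }
}
{code}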



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12908) Make JvmPauseMonitor a singleton

2016-03-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12908:

Attachment: HADOOP-12908-002.patch

Patch 002:
* Fix checkstyle and findbugs issues

> Make JvmPauseMonitor a singleton
> 
>
> Key: HADOOP-12908
> URL: https://issues.apache.org/jira/browse/HADOOP-12908
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12908-001.patch, HADOOP-12908-002.patch
>
>
> Make JvmPauseMonitor a singleton, just like JvmMetrics, because there is no 
> use case for running multiple instances per JVM. {{TestMetrics$setPauseMonitor}} 
> is no longer needed, and initialization code can be simplified.
> For example, this code segment
> {noformat}
> pauseMonitor = new JvmPauseMonitor();
> addService(pauseMonitor);
> jm.setPauseMonitor(pauseMonitor);
> {noformat}
> becomes
> {noformat}
> addService(JvmPauseMonitor.INSTANCE);
> {noformat}
> And this code segment
> {noformat}
>   pauseMonitor = new JvmPauseMonitor();
>   pauseMonitor.init(config);
>   pauseMonitor.start();
>   metrics.getJvmMetrics().setPauseMonitor(pauseMonitor);
> {noformat}
> becomes
> {noformat}
>   JvmPauseMonitor.INSTANCE.init(config);
>   JvmPauseMonitor.INSTANCE.start();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12906) AuthenticatedURL should convert a 404/Not Found into an FileNotFoundException.

2016-03-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189880#comment-15189880
 ] 

Hudson commented on HADOOP-12906:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9449 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9449/])
HADOOP-12906. AuthenticatedURL should convert a 404/Not Found into an 
(gtcarrera9: rev 9a79b738c582bd84727831987b845535625d75fe)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticatedURL.java


> AuthenticatedURL should convert a 404/Not Found into an 
> FileNotFoundException. 
> ---
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are not authenticated. 
> It isn't checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.
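The shape of the check being requested, sketched against the standard HttpURLConnection API (illustrative, not the attached patch):

{code}
import java.io.FileNotFoundException;
import java.net.HttpURLConnection;

// After the authenticated connection has been opened:
int status = conn.getResponseCode();
if (status == HttpURLConnection.HTTP_NOT_FOUND) {
  throw new FileNotFoundException(conn.getURL().toString());
}
{code}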



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12857) Rework hadoop-tools

2016-03-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189837#comment-15189837
 ] 

Allen Wittenauer commented on HADOOP-12857:
---


Manual run without unit tests:

| Vote |  Subsystem |  Runtime   | Comment

|   0  |reexec  |  0m 21s| Docker mode activated. 
|   0  | shelldocs  |  0m 4s | Shelldocs was not available. 
|  +1  |   @author  |  0m 0s | The patch does not contain any @author 
|  ||| tags.
|  +1  |test4tests  |  0m 0s | The patch appears to include 6 new or 
|  ||| modified test files.
|   0  |mvndep  |  3m 48s| Maven dependency ordering for branch 
|  +1  |mvninstall  |  11m 42s   | trunk passed 
|  +1  |   compile  |  13m 16s   | trunk passed 
|  +1  |   mvnsite  |  12m 55s   | trunk passed 
|  +1  |mvneclipse  |  2m 37s| trunk passed 
|  +1  |   javadoc  |  10m 23s   | trunk passed 
|   0  |mvndep  |  0m 20s| Maven dependency ordering for patch 
|  +1  |mvninstall  |  23m 27s   | the patch passed 
|  +1  |   compile  |  11m 19s   | the patch passed 
|  +1  | javac  |  11m 19s   | the patch passed 
|  +1  |   mvnsite  |  13m 4s| the patch passed 
|  +1  |mvneclipse  |  0m 57s| the patch passed 
|  +1  |shellcheck  |  0m 7s | The applied patch generated 0 new + 94 
|  ||| unchanged - 5 fixed = 94 total (was 99)
|  +1  |whitespace  |  0m 0s | Patch has no whitespace issues. 
|  +1  |   xml  |  0m 5s | The patch has no ill-formed XML file. 
|  +1  |   javadoc  |  10m 53s   | the patch passed 
|  +1  |asflicense  |  0m 24s| Patch does not generate ASF License 
|  ||| warnings.
|  ||  116m 18s  | 

Manually running unit tests:
{code}
 hadoop-common-project/hadoop-common$ mvn test -DskipTests -Pshelltest

[INFO] --- maven-antrun-plugin:1.7:run (common-test-bats-driver) @ 
hadoop-common ---
[INFO] Executing tasks

main:
 [exec] Running bats -t hadoop_add_classpath.bats
 [exec] 1..11
 [exec] ok 1 hadoop_add_classpath (simple not exist)
 [exec] ok 2 hadoop_add_classpath (simple wildcard not exist)
 [exec] ok 3 hadoop_add_classpath (simple exist)
 [exec] ok 4 hadoop_add_classpath (simple wildcard exist)
 [exec] ok 5 hadoop_add_classpath (simple dupecheck)
 [exec] ok 6 hadoop_add_classpath (default order)
 [exec] ok 7 hadoop_add_classpath (after order)
 [exec] ok 8 hadoop_add_classpath (before order)
 [exec] ok 9 hadoop_add_classpath (simple dupecheck 2)
 [exec] ok 10 hadoop_add_classpath (dupecheck 3)
 [exec] ok 11 hadoop_add_classpath (complex ordering)
 [exec] Running bats -t hadoop_add_colonpath.bats
 [exec] 1..9
 [exec] ok 1 hadoop_add_colonpath (simple not exist)
 [exec] ok 2 hadoop_add_colonpath (simple exist)
 [exec] ok 3 hadoop_add_colonpath (simple dupecheck)
 [exec] ok 4 hadoop_add_colonpath (default order)
 [exec] ok 5 hadoop_add_colonpath (after order)
 [exec] ok 6 hadoop_add_colonpath (before order)
 [exec] ok 7 hadoop_add_colonpath (simple dupecheck 2)
 [exec] ok 8 hadoop_add_colonpath (dupecheck 3)
 [exec] ok 9 hadoop_add_colonpath (complex ordering)
 [exec] Running bats -t hadoop_add_common_to_classpath.bats
 [exec] 1..3
 [exec] ok 1 hadoop_add_common_to_classpath (negative)
 [exec] ok 2 hadoop_add_common_to_classpath (positive)
 [exec] ok 3 hadoop_add_common_to_classpath (build paths)
 [exec] Running bats -t hadoop_add_javalibpath.bats
 [exec] 1..9
 [exec] ok 1 hadoop_add_javalibpath (simple not exist)
 [exec] ok 2 hadoop_add_javalibpath (simple exist)
 [exec] ok 3 hadoop_add_javalibpath (simple dupecheck)
 [exec] ok 4 hadoop_add_javalibpath (default order)
 [exec] ok 5 hadoop_add_javalibpath (after order)
 [exec] ok 6 hadoop_add_javalibpath (before order)
 [exec] ok 7 hadoop_add_javalibpath (simple dupecheck 2)
 [exec] ok 8 hadoop_add_javalibpath (dupecheck 3)
 [exec] ok 9 hadoop_add_javalibpath (complex ordering)
 [exec] Running bats -t hadoop_add_ldlibpath.bats
 [exec] 1..9
 [exec] ok 1 hadoop_add_ldlibpath (simple not exist)
 [exec] ok 2 hadoop_add_ldlibpath (simple exist)
 [exec] ok 3 hadoop_add_ldlibpath (simple dupecheck)
 [exec] ok 4 hadoop_add_ldlibpath (default order)
 [exec] ok 5 hadoop_add_ldlibpath (after order)
 [exec] ok 6 hadoop_add_ldlibpath (before order)
 [exec] ok 7 hadoop_add_ldlibpath (simple dupecheck 2)
 [exec] ok 8 hadoop_add_ldlibpath (dupecheck 3)
 [exec] ok 9 hadoop_add_ldlibpath (complex ordering)
 [exec] Running bats -t hadoop_add_param.bats
 [exec] 1..4
  

[jira] [Updated] (HADOOP-12906) AuthenticatedURL should convert a 404/Not Found into an FileNotFoundException.

2016-03-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12906:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

I committed this patch into trunk and branch-2. Thanks [~ste...@apache.org] for 
the work and [~liuml07] for the quick review! Given the fact that this patch is 
small, I'm also fine with cherry-picking it to branch-2.8. 

> AuthenticatedURL should convert a 404/Not Found into an 
> FileNotFoundException. 
> ---
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are not authenticated. 
> It isn't checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12906) AuthenticatedURL should convert a 404/Not Found into an FileNotFoundException.

2016-03-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12906:
---
Summary: AuthenticatedURL should convert a 404/Not Found into an 
FileNotFoundException.   (was: AuthenticatedURL translates a 404/Not Found into 
an AuthenticationException. It isn't)

> AuthenticatedURL should convert a 404/Not Found into an 
> FileNotFoundException. 
> ---
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are not authenticated. 
> It isn't checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12906) AuthenticatedURL translates a 404/Not Found into an AuthenticationException. It isn't

2016-03-10 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189817#comment-15189817
 ] 

Li Lu commented on HADOOP-12906:


Patch LGTM. +1. Will commit shortly. 

> AuthenticatedURL translates a 404/Not Found into an AuthenticationException. 
> It isn't
> -
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are not authenticated. 
> It isn't checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12857) Rework hadoop-tools

2016-03-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Release Note: 

* Turning on optional things from the tools directory can now be done via 
hadoop-env.sh without impacting the various user-facing CLASSPATH.
* The tools directory is no longer pulled in blindly for any utilities that 
pull it in.  
* TOOL\_PATH / HADOOP\_TOOLS\_PATH has been broken apart and replaced with 
HADOOP\_TOOLS\_HOME, HADOOP\_TOOLS\_DIR and HADOOP\_TOOLS\_LIB\_JARS\_DIR to be 
consistent with the rest of Hadoop.

  was:
* Turning on optional things from the tools directory can now be done via 
hadoop-env.sh without impacting the various user-facing CLASSPATH.
* The tools directory is no longer pulled in blindly for any utilities that 
pull it in.  
* TOOL_PATH / HADOOP_TOOLS_PATH has been broken apart and replaced with 
HADOOP_TOOLS_HOME, HADOOP_TOOLS_DIR and HADOOP_TOOLS_LIB_JARS_DIR to be 
consistent with the rest of Hadoop.
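For example, enabling optional tools from hadoop-env.sh might look like this (a sketch; the variable name is an assumption based on trunk's hadoop-env.sh rather than something stated in this note):

{code}
# Pull selected tools modules onto the classpath on demand,
# without blindly adding every tool.
export HADOOP_OPTIONAL_TOOLS="hadoop-aws,hadoop-azure"
{code}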


> Rework hadoop-tools
> ---
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch, HADOOP-12857.01.patch, 
> HADOOP-12857.02.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets pulled in wholesale is a growing burden as the 
> number of tools increases.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12768) Add a step in the release process to update the release year in Web UI footer

2016-03-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189785#comment-15189785
 ] 

Xiao Chen commented on HADOOP-12768:


Given that Yongjun's [initial 
proposal|http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201602.mbox/%3ccaa0w1bt9uoh8pc5nctg5ycdgku2dkpbwqcpr-y2v7h-bqhi...@mail.gmail.com%3E]
 hasn't met any objections, and this has been discussed during the [2.6.4 
vote|https://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201602.mbox/%3c1455244919423.38...@hortonworks.com%3E],
 which has a broad audience, I plan to go ahead and update the 
[HowToRelease|https://wiki.apache.org/hadoop/HowToRelease] page with the 
content in Yongjun's last comment.

Please let me know if there are any concerns. Thanks!

> Add a step in the release process to update the release year in Web UI footer
> -
>
> Key: HADOOP-12768
> URL: https://issues.apache.org/jira/browse/HADOOP-12768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>
> Per the discussion in HDFS-9629, this jira is to propose adding a step in the 
> release process ( https://wiki.apache.org/hadoop/HowToRelease) to update the 
> release year in Web UI footer, when creating RC for a release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12857) Rework hadoop-tools

2016-03-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Release Note: 
* Turning on optional things from the tools directory can now be done via 
hadoop-env.sh without impacting the various user-facing CLASSPATH.
* The tools directory is no longer pulled in blindly for any utilities that 
pull it in.  
* TOOL_PATH / HADOOP_TOOLS_PATH has been broken apart and replaced with 
HADOOP_TOOLS_HOME, HADOOP_TOOLS_DIR and HADOOP_TOOLS_LIB_JARS_DIR to be 
consistent with the rest of Hadoop.

  was:
* Turning on optional things from the tools directory can now be done via 
hadoop-env.sh without impacting the various user-facing CLASSPATH.
* The tools directory is no longer pulled in blindly for any utilities that 
pull it in.  


> Rework hadoop-tools
> ---
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch, HADOOP-12857.01.patch, 
> HADOOP-12857.02.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets pulled in wholesale is a growing burden as the 
> number of tools increases.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189743#comment-15189743
 ] 

Colin Patrick McCabe commented on HADOOP-12910:
---

+1 for [~steve_l]'s request for a specification.

Can we also have a {{close}} method that uses reference counting on this FS 
object?  Lack of reference counting in {{FileSystem#close}} is a major 
usability problem.

bq. Implementing a linearizability guarantee would significantly complicate 
this effort...

It depends on how it's implemented.  If it's implemented in the "obvious" way 
by having the client only send dependent operations once their dependencies 
have been completed, then there is very little extra complexity.  If the 
dependencies are expressed on the server, then there is significant extra 
complexity.  I think on balance I agree that it might be best not to implement 
this in the initial version.
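A tiny sketch of the reference-counted close being requested (illustrative; not an existing FileSystem API):

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedFs implements Closeable {
  private final AtomicInteger refCount = new AtomicInteger(1);

  /** Each additional user of the shared instance takes a reference. */
  void retain() {
    refCount.incrementAndGet();
  }

  /** Resources are released only when the last user closes. */
  @Override
  public void close() throws IOException {
    if (refCount.decrementAndGet() == 0) {
      // Actually release sockets, caches, etc. here.
    }
  }
}
{code}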

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189741#comment-15189741
 ] 

Mingliang Liu commented on HADOOP-12912:


I'm in favor of replacing the log4j logger with slf4j here (as we're doing in 
other classes). Please refer to [HDFS-8971].

I don't quite get the point of a performance gain from adding a guard here. 
Adding a guard brings no obvious difference because 1) the debug() parameters 
are string literals, which are immutable, and 2) LOG.debug() checks the log 
level internally. Would you kindly explain in the description? 
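The slf4j pattern being suggested looks roughly like this (the message is illustrative):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Progress {
  private static final Logger LOG = LoggerFactory.getLogger(Progress.class);

  public void set(float progress) {
    // Parameterized logging defers string construction until after the
    // level check, so no explicit isDebugEnabled() guard is needed.
    LOG.debug("set progress to {}", progress);
  }
}
{code}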

> Add LOG.isDebugEnabled() guard in Progress.set method
> -
>
> Key: HADOOP-12912
> URL: https://issues.apache.org/jira/browse/HADOOP-12912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12912.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12501) Enable SwiftNativeFileSystem to ACLs

2016-03-10 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12501:
-
Summary: Enable SwiftNativeFileSystem to ACLs  (was: Enable 
SwiftNativeFileSystem to preserve user, group, permission)

> Enable SwiftNativeFileSystem to ACLs
> 
>
> Key: HADOOP-12501
> URL: https://issues.apache.org/jira/browse/HADOOP-12501
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Chen He
>Assignee: Chen He
>
> Currently, if a user copies a file/dir from the local FS or HDFS to the 
> Swift object store, u/g/p (user/group/permission) will be gone. There should 
> be a way to preserve u/g/p. This would benefit large transfers of files/dirs 
> between HDFS/the local FS and the Swift object store. We also need to be 
> careful, since Hadoop prevents general users from changing u/g/p, especially 
> if Kerberos is enabled.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12913) Drop the @LimitedPrivate marker off UGI, as it's clearly untrue

2016-03-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189715#comment-15189715
 ] 

Chris Nauroth commented on HADOOP-12913:


Perhaps this is a duplicate of HADOOP-10776?

> Drop the @LimitedPrivate marker off UGI, as it's clearly untrue
> -
>
> Key: HADOOP-12913
> URL: https://issues.apache.org/jira/browse/HADOOP-12913
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> UGI declares itself as
> {code}
> @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "HBase", "Hive", 
> "Oozie"})
> {code}
> Really it's "any application that interacts with services in a secure 
> cluster". 
> I propose: replace with {{@Public, @Evolving}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12913) Drop the @LimitedPrivate marker off UGI, as it's clearly untrue

2016-03-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189714#comment-15189714
 ] 

Chris Nauroth commented on HADOOP-12913:


+1 for the proposal.  It's effectively public at this point, given the need for 
numerous downstream projects to use it to get anything practical done.  We need 
to treat it as public anyway and make sure code changes are 
backwards-compatible.

I'd also argue that the same change needs to be made for numerous other 
security-related classes: {{SecurityUtil}}, {{Credentials}}, and the 
token-related classes.  I'm fine with moving in small steps, though, if others 
prefer to keep the scope of this JIRA limited to UGI.

> Drop the @LimitedPrivate marker off UGI, as it's clearly untrue
> -
>
> Key: HADOOP-12913
> URL: https://issues.apache.org/jira/browse/HADOOP-12913
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> UGI declares itself as
> {code}
> @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "HBase", "Hive", 
> "Oozie"})
> {code}
> Really it's "any application that interacts with services in a secure 
> cluster". 
> I propose: replace with {{@Public, @Evolving}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12857) Rework hadoop-tools

2016-03-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189620#comment-15189620
 ] 

Allen Wittenauer edited comment on HADOOP-12857 at 3/10/16 5:50 PM:


-02:
* documentation 
* eliminate HADOOP_TOOLS_PATH since it makes zero sense anymore with this 
layout and the other capabilities of the shell code in trunk
* rework to hopefully work with Windows. :D

Should I break this apart to send through Jenkins or ... ?


was (Author: aw):
-02:
* documentation 
* eliminate HADOOP_TOOLS_PATH since it makes zero sense anymore with this 
layout and the other capabilities of the shell code in trunk

Should I break this apart to send through Jenkins or ... ?

> Rework hadoop-tools
> ---
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch, HADOOP-12857.01.patch, 
> HADOOP-12857.02.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12857) Rework hadoop-tools

2016-03-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189620#comment-15189620
 ] 

Allen Wittenauer edited comment on HADOOP-12857 at 3/10/16 5:50 PM:


-02:
* documentation 
* eliminate HADOOP_TOOLS_PATH since it makes zero sense anymore with this 
layout and the other capabilities of the shell code in trunk

Should I break this apart to send through Jenkins or ... ?


was (Author: aw):
-02:
* documentation 
* eliminate HADOOP_TOOLS_PATH since it makes zero sense anymore with this 
layout and the other capabilities of the shell code in trunk

> Rework hadoop-tools
> ---
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch, HADOOP-12857.01.patch, 
> HADOOP-12857.02.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189621#comment-15189621
 ] 

Chris Nauroth commented on HADOOP-12910:


I am sensing massive scope creep in this discussion.

bq. Actually, one more thing to define in HDFS-9924 and include in any 
specification is: linearizability/serializability guarantees

I'm going to repeat some of my comments from HDFS-9924.  A big motivation for 
this effort is that we often see an application that needs to execute a large 
set of renames, where the application knows that there is no dependency between 
the rename operations and no ordering requirement.  Although linearizability is 
certainly nicer to have than not, use cases like this don't need it.

Implementing a linearizability guarantee would significantly complicate this 
effort.  ZooKeeper has an async API with ordering guarantees, and it takes a 
very delicate coordination between client-side and server-side state to make 
that happen.  Instead, I suggest that we focus on what we really need (async 
execution of independent operations) and tell clients that they have the 
responsibility to coordinate dependencies between calls.  I have also commented 
on HDFS-9924 that we could later provide a programming model of futures + 
promises as a more elegant way to help callers structure code with multiple 
dependent async calls.  Even that much is not an immediate need, though.

This does not preclude providing a linearizability guarantee at some point in 
the future.  I'm just saying that we have an opportunity to provide something 
valuable sooner even without linearizability.

bq. I'm going to be ruthless and say "I'd like to see a specification of this 
alongside the existing one". Because that one has succeeded in being a 
reference point for everyone; we need to continue that for a key binding. It 
should be straightforward here.

Assuming the above project plan is acceptable (no linearizability right now), 
this reduces to a simple statement like "individual async operations adhere to 
the same contract as the corresponding sync operations, and there are no 
guarantees on ordering across multiple async operations."

bq. Is it the future that raises an IOE, or the operation? I can see both 
needing to

Certainly Hadoop-specific exceptions like {{AccessControlException}} and 
{{QuotaExceededException}} must dispatch asynchronously, such as wrapped in an 
{{ExecutionException}}.  You won't know if you're going to hit one of these at 
the time of submitting the call.  My opinion is that if the API is truly async, 
then it implies we cannot perform I/O on the calling thread, and therefore 
cannot throw an {{IOException}} at call time.  I believe Nicholas wants to put 
{{throws IOException}} in the method signatures anyway for ease of 
backwards-compatible changes in the future though, just in case we find a need 
later.  I think that's acceptable.
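
For illustration, a sketch of how a caller would see such an exception, assuming the proposed {{Future}}-returning rename (the caller code here is hypothetical):
{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import org.apache.hadoop.security.AccessControlException;

class AsyncCallerSketch {
  void await(Future<Boolean> pendingRename) throws Exception {
    try {
      boolean renamed = pendingRename.get();  // blocks until the async op completes
      System.out.println("renamed: " + renamed);
    } catch (ExecutionException e) {
      // Hadoop-specific failures surface here, not at submission time.
      if (e.getCause() instanceof AccessControlException) {
        throw (AccessControlException) e.getCause();
      }
      throw e;
    }
  }
}
{code}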


> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12857) Rework hadoop-tools

2016-03-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Attachment: HADOOP-12857.02.patch

-02:
* documentation 
* eliminate HADOOP_TOOLS_PATH since it makes zero sense anymore with this 
layout and the other capabilities of the shell code in trunk

> Rework hadoop-tools
> ---
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch, HADOOP-12857.01.patch, 
> HADOOP-12857.02.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189587#comment-15189587
 ] 

Chris Nauroth commented on HADOOP-12666:


The create/append/flush sequence is hugely different behavior.  At the protocol 
layer, there is the addition of the flush parameter, which is a deviation from 
stock WebHDFS.  Basically any of the custom *Param classes represent deviations 
from WebHDFS protocol: leaseId, ADLFeatureSet, etc.

At the client layer, the aggressive client-side caching and buffering in the 
name of performance creates different behavior from stock WebHDFS.  I and 
others have called out that while perhaps you don't observe anything to be 
broken right now, that's no guarantee that cache consistency won't become a 
problem for certain applications.  This is not a wire protocol difference, but 
it is a significant deviation in behavior from stock WebHDFS.

At this point, it appears that the ADL protocol, while heavily inspired by the 
WebHDFS protocol, is not really a compatible match.  It is its own protocol 
with its own unique requirements for clients to use it correctly and use it 
well.  Accidentally connecting the ADL client to an HDFS cluster would be 
disastrous.  The create/append/flush sequence would cause massive unsustainable 
load to the NameNode in terms of RPC calls and edit logging.  Client write 
latency would be unacceptable.  Likewise, accidentally connecting the stock 
WebHDFS client to ADL seems to yield unacceptable performance for ADL.

It is these large deviations that lead me to conclude the best choice is a 
dedicated client distinct from the WebHDFS client code.  Having full control of 
that client gives us the opportunity to provide the best possible user 
experience with ADL.  As I've stated before though, I can accept a short-term 
plan of some code reuse with the WebHDFS client.

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-007.patch, HADOOP-12666-008.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-03-10 Thread Huizhi Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189565#comment-15189565
 ] 

Huizhi Lu commented on HADOOP-12587:


Thank you for resolving this, Benoy!!

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against Linux KDC -talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12587-001.patch, HADOOP-12587-002.patch, 
> HADOOP-12587-003.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This prevents callers whose 
> tokens lack the attribute from submitting jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189559#comment-15189559
 ] 

Chris Nauroth commented on HADOOP-12666:


bq. Notes From Mar 9, 2016 Call w/ MSFT

I really should have been in this meeting.  More importantly, any out-of-band 
meeting like this should be announced beforehand for full disclosure to the 
Apache community.  Was there an announcement that I just missed in the deluge 
of email?  If not, then please pre-announce any similar meetings in the future.

Thank you for posting the summary publicly here though.  That's exactly the 
right thing to do.

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-007.patch, HADOOP-12666-008.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-10 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189533#comment-15189533
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


[~cnauroth] Thanks a lot, Chris. We are working on the feasibility of a 
dedicated client and the timeline for that. Apart from the Create semantics and 
the no-redirect operation during Create, Append, and Read, do you think ADL 
deviates from the protocol anywhere else? If yes, could you please highlight 
those points so we can take them up, mostly as separate JIRAs? Our intention is 
to stay as close to Hadoop semantics as possible.   

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-007.patch, HADOOP-12666-008.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-10 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189518#comment-15189518
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


*Notes From Mar 9, 2016 Call w/ MSFT*
Who: Cloudera: Aaron Fabbri, Tony Wu, MSFT: Vishwajeet, Cathy, Chris Douglas, 
Shrikant
*Discussion*
1. Packaging / Code Structure
 - In general, an ADL extension of WebHDFS would not be acceptable as a 
long-term solution
 - WebHDFS client not designed for extension.
 - [Available options as of 
today|https://issues.apache.org/jira/browse/HADOOP-12666?focusedCommentId=15186380=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15186380]
 - Option 1 vs 2 (refactor WebHDFS) vs 3 (copy paste code, bad) 
 - Option 2 (MSFT): Need to make change to WebHDFS to accept ADL stuff. May be 
significant work.
 - Raise a separate JIRA for WebHDFS extension

2. WebHDFS and ADL cannot co-exist if both follow the OAuth2 authentication 
protocol
 - Near term: specify limitation of only one webhdfs client at a time w/ OAUTH. 
 Ok to have Webhdfs non-oauth and ADL configured on same cluster. - AP: 
Vishwajeet to document as known limitation
 - Long term: v2 of adl connector that factors out webhdfs client commonality 
better

3. Integrity / Semantics
 - Single writer semantics?
 - See leaseId in PrivateAzureDataLakeFileSystem::createNonRecursive()
 - Append semantics do not close the connection, hence the leaseId is not required.

4. Action Items
 - [msft] Put the WebHDFS extension issue into a separate JIRA so folks from the 
community can comment.  Do they prefer that hadoop-azure-datalake mix packages, 
relaxing some method privacy, or another approach? - Raised HDFS-9938
 - [msft] volatile not needed in addition to synchronized in 
BatchByteArrayInputStream - AP: Vishwajeet
 - [msft] Add to documentation: caveat for v1 where you can only have one 
WebHDFS (ADL or vanilla) with Oauth2 not both. - AP: Vishwajeet
 - [cloudera] Go over latest patches.
 - [cloudera] Reach out to other hadoop committers to see what else needs 
addressing before we can get committed.
 - [msft/cloudera] Start document on adl:// semantics, deltas versus HDFS, w/ 
and w/o FileStatusCache

5. Follow Up Topics (homework / next meeting)
- Follow up on append().  No leaseid.  What is delta from HDFS semantics.
- BufferManager purpose, coherency
- For readahead, so multiple FSInputStreams can see the same buffer that was 
fetched with readahead.
- Follow up on flushAsync() in write path (why / how)

6. Future plan of ADL client implementation
 - Share with community about future plans
 - Versioning


> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-007.patch, HADOOP-12666-008.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189492#comment-15189492
 ] 

Allen Wittenauer commented on HADOOP-12910:
---

Classes, errors, etc. should be 'whats', not 'whens'.  AsyncFileSystem is 
way better. (Yes, I think Java's Future object is a terrible name.)

Also, calling this a FileSystem sort of underscores that FileContext might as 
well get tossed, since not even Hadoop people bother to use it, despite its 
being a better-defined API.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12626) Intel ISA-L libraries should be added to the Dockerfile

2016-03-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189485#comment-15189485
 ] 

Allen Wittenauer commented on HADOOP-12626:
---

The download needs to happen before the 'YETUS CUT HERE' line if you want to 
enable ISA-L testing during precommit...

> Intel ISA-L libraries should be added to the Dockerfile
> ---
>
> Key: HADOOP-12626
> URL: https://issues.apache.org/jira/browse/HADOOP-12626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Zheng
>Priority: Blocker
> Attachments: HADOOP-12626-v1.patch, HADOOP-12626-v2.patch
>
>
> HADOOP-11887 added a compile and runtime dependence on the Intel ISA-L 
> library but didn't add it to the Dockerfile so that it could be part of the 
> Docker-based build environment (start-build-env.sh).  This needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12914) RPC client should deal with the IP address change

2016-03-10 Thread Michiel Vanderlee (JIRA)
Michiel Vanderlee created HADOOP-12914:
--

 Summary: RPC client should deal with the IP address change
 Key: HADOOP-12914
 URL: https://issues.apache.org/jira/browse/HADOOP-12914
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.2
 Environment: CentOS 7
Reporter: Michiel Vanderlee


I'm seeing HADOOP-7472 again for the datanode in v2.7.2.

If I start the datanode before the DNS entry for the namenode resolves, it 
never retries the resolution and keeps failing with an UnknownHostException.
A restart of the datanode fixes this.

TRACE ipc.ProtobufRpcEngine: 31: Exception <- 
nn1.hdfs-namenode-rpc.service.consul:8020: versionRequest 
{java.net.UnknownHostException: Invalid host name: local host is: (unknown); 
destination host is: "nn1.hdfs-namenode-rpc.service.consul":8020; 
java.net.UnknownHostException; For more details see:  
http://wiki.apache.org/hadoop/UnknownHost}

The error comes from org.apache.hadoop.ipc.Client.java, inner class 
Connection, line 409:

public Connection(ConnectionId remoteId, int serviceClass) throws IOException {
  this.remoteId = remoteId;
  this.server = remoteId.getAddress();
  if (server.isUnresolved()) {
    throw NetUtils.wrapException(server.getHostName(),
        server.getPort(),
        null,
        0,
        new UnknownHostException());
  }

The remoteId.address (InetSocketAddress) seems to be resolved only on 
creation, and never again unless done manually.
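
A sketch of one possible workaround (hypothetical helper, not the actual fix): a freshly constructed InetSocketAddress performs a new DNS lookup, whereas the cached instance stays unresolved forever.

import java.net.InetSocketAddress;

public final class AddressRefresh {
  // Rebuild the address so the JDK retries resolution instead of reusing
  // the stale, unresolved instance.
  public static InetSocketAddress refresh(InetSocketAddress server) {
    if (server.isUnresolved()) {
      return new InetSocketAddress(server.getHostName(), server.getPort());
    }
    return server;
  }
}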



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12913) Drop the @LimitedPrivate marker off UGI, as it's clearly untrue

2016-03-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189332#comment-15189332
 ] 

Jason Lowe commented on HADOOP-12913:
-

+1, sounds good to me.  IMHO LimitedPrivate has little utility given that we 
now support rolling upgrades.  Changing anything marked LimitedPrivate in a 
backwards-incompatible way usually has the same ramifications as doing the same 
to something marked Public, because normally nobody is willing to break the 
listed downstream projects when they upgrade their cluster.  And as in this 
case, it's often difficult or impossible to develop "real" applications without 
using a lot of the things marked LimitedPrivate, which is why those downstream 
projects use them.

> Drop the @LimitedPrivate marker off UGI, as it's clearly untrue
> -
>
> Key: HADOOP-12913
> URL: https://issues.apache.org/jira/browse/HADOOP-12913
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> UGI declares itself as
> {code}
> @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "HBase", "Hive", 
> "Oozie"})
> {code}
> Really it's "any application that interacts with services in a secure 
> cluster". 
> I propose: replace with {{@Public, @Evolving}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12913) Drop the @LimitedPrivate marker off UGI, as it's clearly untrue

2016-03-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12913:
---

 Summary: Drop the @LimitedPrivate marker off UGI, as it's clearly 
untrue
 Key: HADOOP-12913
 URL: https://issues.apache.org/jira/browse/HADOOP-12913
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.8.0
Reporter: Steve Loughran


UGI declares itself as
{code}
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "HBase", "Hive", 
"Oozie"})
{code}

Really it's "any application that interacts with services in a secure cluster". 

I propose: replace with {{@Public, @Evolving}}
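
Concretely, the proposal amounts to something like this (sketched on a stub class, since only the annotations matter here):
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Replace the LimitedPrivate marker with Public/Evolving.
@InterfaceAudience.Public
@InterfaceStability.Evolving
class UgiAnnotationSketch {
}
{code}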



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11996) Native part for ISA-L erasure coder

2016-03-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189271#comment-15189271
 ] 

Hadoop QA commented on HADOOP-11996:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 56s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 11s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792498/HADOOP-11996-v9.patch 
|
| JIRA Issue | HADOOP-11996 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  |
| uname | Linux 3d402e96b5ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 318c9b6 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_74 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8835/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-12819) Migrate TestSaslRPC and related codes to rebase on ProtobufRpcEngine

2016-03-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189238#comment-15189238
 ] 

Kai Zheng commented on HADOOP-12819:


Thanks [~wheat9] for reviewing this and the nice comment!

> Migrate TestSaslRPC and related codes to rebase on ProtobufRpcEngine
> 
>
> Key: HADOOP-12819
> URL: https://issues.apache.org/jira/browse/HADOOP-12819
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12819-v1.patch
>
>
> Sub-task of HADOOP-12579. To prepare for getting rid of the obsolete 
> WritableRpcEngine, this will change the TestSaslRPC test and the related code 
> to use ProtobufRpcEngine instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11996) Native part for ISA-L erasure coder

2016-03-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11996:
---
Attachment: HADOOP-11996-v9.patch

Updated the patch accordingly. Colin, please kindly let me know if it looks 
much better to you too. Tests passed on Linux and the build passed on Windows. 
Thanks.

> Native part for ISA-L erasure coder
> ---
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch, HADOOP-11996-v8.patch, 
> HADOOP-11996-v9.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the basic 
> facilities based on the Intel ISA-L library separately from the JNI stuff. 
> It's also easier to debug and troubleshoot, as no JNI or Java pieces are 
> involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11996) Native part for ISA-L erasure coder

2016-03-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189203#comment-15189203
 ] 

Kai Zheng commented on HADOOP-11996:


Thanks [~cmccabe] for the thorough and insightful review!
bq. Any globals in header files should be defined with the extern keyword, to 
indicate that they are declarations, not definitions. ...
This is very good educational information for me and will also make the code 
more professional. Thanks for taking the time to explain this!
bq. So pick a .c file to define this global in (probably isal_load.c?) and make 
the header file use extern...
Yes, I will make exactly that change.
bq. In general, we don't check in JNI-generated header files. Given that this 
is a pre-existing problem, we could fix this in a follow-on JIRA, though.
Agreed. I can make the desired change in HADOOP-11540, where the JNI header 
files can be generated from the Java class files.
bq. Is there any reason for having it when that function already exists?
The duplication was introduced accidentally and can be avoided, though it's 
small.
bq. Is there a reason to keep coder_util.c and erasure_coder.c ...  Similarly, 
should the contents of erasure_code.c also be merged into coder.c? If not, why 
not?
I should explain, and also make some changes to make the intention clear. The 
code is organized in two thin layers: the first wraps the ISA-L library and 
builds the main encoding/decoding logic, and the second is a very thin JNI 
layer for the Java coders. Why would I prefer to keep the two parts separate? 
In my initial code everything was mixed together, and I found it very hard to 
debug and fix the logic all the way from Java down through the native stack. So 
I separated the main logic out of the JNI environment, resulting in 
erasure_code.c, erasure_coder.c, etc., and wrote a sample test program 
separately so I could run it, debug it, and fix it. That made the cycle much 
easier and more lightweight, because I didn't have to go through the 
time-consuming Hadoop native build each time. You can also see that the test 
program is a very simple one, with no JNI or JVM coupling at all. To make this 
intention clear, I will rename the files so that all the JNI-related files are 
prefixed with {{jni_}}. I hope this also works for you. 
bq. The distinction between "util" and "coder" seems artificial to me.
I agree. I will rename {{coder_util.c}} to {{jni_common.c}}, because the 
functions it contains will be shared by at least two coders, the RS coder and a 
later XOR coder.
bq. The declarations for the dump.c functions are not in dump.h as expected. 
Instead, they seem to be hiding in erasure_coder.h-- that was unexpected.
Good catch. I will correct it.
bq. I don't think there is a good reason to have an include directory.
I agree; right now it isn't really necessary, as the code introduced in the 
folder isn't large at all, only 20 files or so.

> Native part for ISA-L erasure coder
> ---
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch, HADOOP-11996-v8.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the basic 
> facilities based on the Intel ISA-L library separately from the JNI stuff. 
> It's also easier to debug and troubleshoot, as no JNI or Java pieces are 
> involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2016-03-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189183#comment-15189183
 ] 

Hudson commented on HADOOP-11404:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9447 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9447/])
HADOOP-11404. Clarify the "expected client Kerberos principal is null" (harsh: 
rev 318c9b68b059981796f2742b4b7ee604ccdc47e5)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java


> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch, HADOOP-11404.002.patch, 
> HADOOP-11404.003.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user is authenticated 
> successfully but denied by a configured ACL, not a Kerberos issue. We should 
> improve this log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2016-03-10 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189177#comment-15189177
 ] 

Harsh J commented on HADOOP-11404:
--

Thanks [~ste...@apache.org]! Committing shortly.

> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch, HADOOP-11404.002.patch, 
> HADOOP-11404.003.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user is authenticated 
> successfully but denied by a configured ACL, not a Kerberos issue. We should 
> improve this log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189164#comment-15189164
 ] 

Steve Loughran commented on HADOOP-12910:
-

Actually, one more thing to define in HDFS-9924 and include in any 
specification is: linearizability/serializability guarantees.

Specifically, will this API make the following guarantees?

h3. Serializability/Linearizability/Atomicity
# The atomicity requirements/guarantees of the {{FileSystem}} API are 
unchanged (and those blobstores which break them stay broken).

# A series of operations, issued from a single thread against the same instance 
of {{FutureFileSystem}}, will always be executed in the order in which they are 
submitted.

# If at time {{t}} thread A issues a request, and then in the same process, at 
time {{t1 > t}}, thread B issues a filesystem request *against the same 
instance of FutureFileSystem*, then the request by thread A will be executed 
before the request by thread B. 
That is: requests are never reordered; within a single FS instance they are 
executed in the order of submission, irrespective of which thread makes the 
submission. 

# If at time {{t}}, thread A issues a request, then in the same process, at 
time {{t1 > t}}, thread B issues a filesystem request *against a different 
instance of FutureFileSystem*, then there are no guarantees of the order of 
execution. Different queue: different outcome.

There's also the ordering across processes and systems. Here you'd need to say 
something like "they are processed in the strict order in which the NN receives 
them". They may be interleaved, but the actions of each {{FutureFileSystem}} 
instance are executed in a linear order.

Also: parameter/state validation. Basic parameter validity may be checked in 
the initial call (null values, illegal values), but any request validation that 
examines the observable state of the FS will not take place until the future is 
actually executed. Thus, the state of the filesystem may change between the 
call being made and it being executed. 

If you don't spell this out, then the semantics of 
{code}
delete("/c");
rename("/a","/c");
rename("/b","/a");
{code}
are undefined (assuming {{/a}} and {{/b}} refer to paths for which {{exists()}} 
holds at the time of the call). The cross-thread serialization guarantee is 
needed to ensure that any two threads, synchronized by any means, will have 
the ordering of their requests executed according to the in-process 
{{happens-before}} guarantees of the synchronization mechanism.
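
To make guarantee #3 concrete, a sketch (hypothetical class, not a proposed implementation) of one way a single {{FutureFileSystem}} instance could provide submission-order execution: funnel every operation through a single-threaded executor, so execution order equals submission order regardless of the submitting thread.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class OrderedFutureFileSystemSketch {
  private final ExecutorService queue = Executors.newSingleThreadExecutor();
  private final FileSystem fs;

  OrderedFutureFileSystemSketch(FileSystem fs) { this.fs = fs; }

  Future<Boolean> delete(Path p) {
    return queue.submit(() -> fs.delete(p, true));   // queued behind earlier ops
  }

  Future<Boolean> rename(Path src, Path dst) {
    return queue.submit(() -> fs.rename(src, dst));  // runs after all prior submissions
  }
}
{code}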

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189069#comment-15189069
 ] 

Hadoop QA commented on HADOOP-12912:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 12 unchanged - 5 fixed = 12 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 52s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ha.TestZKFailoverController |
| JDK v1.7.0_95 Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792477/HADOOP-12912.001.patch
 |
| JIRA Issue | HADOOP-12912 |
| Optional Tests |  asflicense  

[jira] [Commented] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189009#comment-15189009
 ] 

Akira AJISAKA commented on HADOOP-12912:


We are removing unnecessary guards from the Hadoop source tree by moving to 
slf4j, so generally, adding a guard does not seem like a good idea. However, if 
{{Progress.set(float progress)}} is called from a hot path and the {{progress}} 
is not in \[0, 1\], many String instances are created and the cost becomes 
higher.
Therefore I'm +1 for adding guards. Would you add a comment that the method is 
called from a hot path, and that's why we need the guard, to save the cost of 
creating String instances?
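
For reference, a minimal sketch of the guarded form (the clamping body and message are illustrative, not copied from Progress.java):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class ProgressSketch {
  private static final Log LOG = LogFactory.getLog(ProgressSketch.class);
  private float progress;

  public void set(float progress) {
    if (progress < 0f || progress > 1f) {
      // The guard skips the concatenation entirely when DEBUG is off, which
      // is what saves the String allocations on the hot path.
      if (LOG.isDebugEnabled()) {
        LOG.debug("Illegal progress value: " + progress + ", clamping to [0, 1]");
      }
      progress = Math.max(0f, Math.min(1f, progress));  // illustrative clamp
    }
    this.progress = progress;
  }
}
{code}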

> Add LOG.isDebugEnabled() guard in Progress.set method
> -
>
> Key: HADOOP-12912
> URL: https://issues.apache.org/jira/browse/HADOOP-12912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12912.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2016-03-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188971#comment-15188971
 ] 

Steve Loughran commented on HADOOP-11404:
-

+1 once checkstyle is happy

> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch, HADOOP-11404.002.patch, 
> HADOOP-11404.003.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user is authenticated 
> successfully but denied by a configured ACL, not a Kerberos issue. We should 
> improve this log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2016-03-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188969#comment-15188969
 ] 

Hadoop QA commented on HADOOP-11404:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 31s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 32s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792469/HADOOP-11404.003.patch
 |
| JIRA Issue | HADOOP-11404 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fdd327eb4885 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188967#comment-15188967
 ] 

Steve Loughran commented on HADOOP-12910:
-

# I'm going to be ruthless and say "I'd like to see a specification of this 
alongside the existing one", because that one has succeeded in being a 
reference point for everyone; we need to continue that for a key binding. It 
should be straightforward here.

# Is it the future that raises an IOE, or the operation? I can see both needing 
to
# Assuming this is targeted @ Hadoop 3, it'd be nice to make sure this works 
really well with the Java 8 language features; maybe this could be the first 
use in the codebase.


> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.
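
To make the proposed shape concrete, here is a minimal sketch of such a 
wrapper; the class name, executor-based delegation, and error handling are 
assumptions for illustration, not part of the proposal. It also shows one 
possible answer to the IOE question above: the operation raises the 
IOException, and callers observe it from {{Future#get()}} as the cause of an 
{{ExecutionException}}.
{code}
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative sketch only; not the proposed FutureFileSystem itself. */
public class FutureFileSystemSketch {
  private final FileSystem fs;        // underlying synchronous FileSystem
  private final ExecutorService pool; // runs the blocking calls

  public FutureFileSystemSketch(FileSystem fs, ExecutorService pool) {
    this.fs = fs;
    this.pool = pool;
  }

  /**
   * Same parameters as FileSystem#rename, but the boolean result is
   * wrapped in a Future. The blocking rename runs on the pool, so any
   * IOException surfaces from Future#get() as an ExecutionException
   * cause rather than from this call.
   */
  public Future<Boolean> rename(final Path src, final Path dst) {
    return pool.submit(new Callable<Boolean>() {
      @Override
      public Boolean call() throws IOException {
        return fs.rename(src, dst);
      }
    });
  }
}
{code}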



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12912:

Attachment: HADOOP-12912.001.patch

Attaching a patch to fix this problem.

[~ajisakaa] could you review it?

> Add LOG.isDebugEnabled() guard in Progress.set method
> -
>
> Key: HADOOP-12912
> URL: https://issues.apache.org/jira/browse/HADOOP-12912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-12912.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12912:

Assignee: Tsuyoshi Ozawa
  Status: Patch Available  (was: Open)

> Add LOG.isDebugEnabled() guard in Progress.set method
> -
>
> Key: HADOOP-12912
> URL: https://issues.apache.org/jira/browse/HADOOP-12912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12912.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188946#comment-15188946
 ] 

Tsuyoshi Ozawa commented on HADOOP-12912:
-

TEZ-2215 reported this since it can be called from a hot path (MergeQueue.next).

> Add LOG.isDebugEnabled() guard in Progress.set method
> -
>
> Key: HADOOP-12912
> URL: https://issues.apache.org/jira/browse/HADOOP-12912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-03-10 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12912:
---

 Summary: Add LOG.isDebugEnabled() guard in Progress.set method
 Key: HADOOP-12912
 URL: https://issues.apache.org/jira/browse/HADOOP-12912
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12908) Make JvmPauseMonitor a singleton

2016-03-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188910#comment-15188910
 ] 

Hadoop QA commented on HADOOP-12908:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 17s 
{color} | {color:red} root: patch generated 4 new + 553 unchanged - 4 fixed = 
557 total (was 557) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 45s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 1s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 34s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s 
{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 

[jira] [Updated] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2016-03-10 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-11404:
-
Attachment: HADOOP-11404.003.patch

Fixing the indent issue. Retrying.

> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch, HADOOP-11404.002.patch, 
> HADOOP-11404.003.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user is authenticated 
> successfully but denied by a configured ACL, not a Kerberos issue. We should 
> improve this log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)