[jira] [Commented] (HADOOP-13601) Typo in a log messages

2016-09-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487514#comment-15487514
 ] 

Sean Busbey commented on HADOOP-13601:
--

Good finds! Interested in coming up with a patch for fixing these? [Our 
contributor guide|http://wiki.apache.org/hadoop/HowToContribute] covers 
everything needed to work across the code base, but I'd imagine that for these 
changes a simple manual compilation test before and after should suffice.

> Typo in a log messages
> --
>
> Key: HADOOP-13601
> URL: https://issues.apache.org/jira/browse/HADOOP-13601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
>
> I am conducting research on log related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in log 
> messages are one of the recurring bugs. Therefore, I made a tool to find typos 
> in log statements. During my experiments, I managed to find the following 
> typos in Hadoop Common:
> in file 
> /hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
>  LOG.info("Token cancelation requested for identifier: "+id), 
> cancelation should be cancellation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13603) Remove package line length checkstyle rule

2016-09-13 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487631#comment-15487631
 ] 

Shane Kumpf commented on HADOOP-13603:
--

Per the comments in YARN-5628, I will start a DISCUSS thread on the -dev lists.

> Remove package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Trivial
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-13218:

Status: Patch Available  (was: Reopened)

Submitting the patch for precommit. Also marked it as an incompatible change. 
We should populate the release note field so that other downstream projects and 
users can be informed.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, along with nice cleanups. HADOOP-12579 
> will be reverted to allow some time for the related YARN/MapReduce changes; this 
> issue is opened to recommit most of the test-related work from HADOOP-12579 for 
> easier tracking and maintenance, as other sub-tasks did.
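For readers unfamiliar with the migration, a minimal sketch of the kind of 
engine selection the migrated tests rely on (the protocol class here is a 
placeholder, not a class from the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

class RpcEngineSelectionSketch {
  // Tests being migrated register the protobuf-based engine for their
  // protocol instead of relying on the legacy WritableRpcEngine.
  static Configuration newConf(Class<?> protocolPb) {
    Configuration conf = new Configuration();
    RPC.setProtocolEngine(conf, protocolPb, ProtobufRpcEngine.class);
    return conf;
  }
}
{code}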



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487596#comment-15487596
 ] 

Tsuyoshi Ozawa commented on HADOOP-13602:
-

* CompileMojo, TestMojo: calling toLowerCase with an explicit locale.
* ProtocMojo: adding a null check to avoid an NPE.
* Exec: handling IOException/InterruptedException instead of the Exception class.
* VersionInfoMojo: removing a needless try-catch and creating MD5Comparator to 
avoid SIC_INNER_SHOULD_BE_STATIC_ANON.
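To illustrate the first and last items, a rough sketch of the two patterns 
(class and method names are made up for the example, not taken from the patch):

{code}
import java.util.Comparator;
import java.util.Locale;

class FindbugsFixSketch {
  // Passing an explicit Locale avoids the locale-dependent case-conversion
  // warning findbugs raises for bare toLowerCase()/toUpperCase().
  static String normalize(String s) {
    return s.toLowerCase(Locale.ENGLISH);
  }

  // A named static nested comparator, rather than an anonymous inner class,
  // avoids SIC_INNER_SHOULD_BE_STATIC_ANON.
  static final class MD5Comparator implements Comparator<String> {
    @Override
    public int compare(String left, String right) {
      return left.compareTo(right);
    }
  }
}
{code}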


> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487621#comment-15487621
 ] 

Hadoop QA commented on HADOOP-13602:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828286/HADOOP-13602.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bcac9bd9af6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e793309 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10494/testReport/ |
| modules | C: hadoop-maven-plugins U: hadoop-maven-plugins |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10494/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-13602:
---

 Summary: Fix findbugs warning in hadoop-maven-plugin
 Key: HADOOP-13602
 URL: https://issues.apache.org/jira/browse/HADOOP-13602
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13594:

Status: Open  (was: Patch Available)

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block builds if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487582#comment-15487582
 ] 

Tsuyoshi Ozawa commented on HADOOP-13594:
-

[~arpiagariu] filed HADOOP-13602 for addressing the warnings. 
[~busbey] I'm assuming we merge this after addressing all the findbugs warnings. 

{quote}
what does this buy us over the current flagging from precommit checks for 
findbugs errors? 
{quote}

Unfortunately, findbugs warnings sometimes get overlooked (e.g. HADOOP-11602, 
calling String.toUpper/toLower without configuring the locale). If the compile 
fails when findbugs generates warnings, reviewers can notice them easily. 
However, there is also a con: compile time will increase. 

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block builds if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13603) Remove package line length checkstyle rule

2016-09-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13603:
-
Component/s: build

> Remove package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Priority: Trivial
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-13218:

Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, along with nice cleanups. HADOOP-12579 
> will be reverted to allow some time for the related YARN/MapReduce changes; this 
> issue is opened to recommit most of the test-related work from HADOOP-12579 for 
> easier tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13601) Typo in a log messages

2016-09-13 Thread Mehran Hassani (JIRA)
Mehran Hassani created HADOOP-13601:
---

 Summary: Typo in a log messages
 Key: HADOOP-13601
 URL: https://issues.apache.org/jira/browse/HADOOP-13601
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mehran Hassani
Priority: Trivial


I am conducting research on log related bugs. I tried to make a tool to fix 
repetitive yet simple patterns of bugs that are related to logs. Typos in log 
messages are one of the recurring bugs. Therefore, I made a tool to find typos 
in log statements. During my experiments, I managed to find the following typos 
in Hadoop Common:

in file 
/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
 LOG.info("Token cancelation requested for identifier: "+id), 
cancelation should be cancellation.
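For clarity, the change being suggested is a one-word edit to that log 
statement (shown out of context; LOG and id belong to the class named above):

{code}
// before (typo):
LOG.info("Token cancelation requested for identifier: " + id);
// after (suggested fix):
LOG.info("Token cancellation requested for identifier: " + id);
{code}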



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13603) Remove package line length checkstyle rule

2016-09-13 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned HADOOP-13603:


Assignee: Shane Kumpf

> Remove package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Trivial
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487794#comment-15487794
 ] 

Arpit Agarwal commented on HADOOP-13602:


+1

> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13602:

Assignee: Tsuyoshi Ozawa
  Status: Patch Available  (was: Open)

> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13602:

Attachment: HADOOP-13602.001.patch

> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13594:
-
Component/s: build

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block builds if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-13 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487396#comment-15487396
 ] 

Genmao Yu commented on HADOOP-13591:


Thanks for your patient review. 

> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487524#comment-15487524
 ] 

Sean Busbey commented on HADOOP-13594:
--

the effect would be maven failing the build.

what does this buy us over the current flagging from precommit checks for 
findbugs errors? are the error messages sufficient to guide a new contributor 
in fixing things themselves, or is this likely to result in new folks 
complaining about the build breaking in confusing ways when their "simple" fix 
includes a findbugs error?

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block builds if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487524#comment-15487524
 ] 

Sean Busbey edited comment on HADOOP-13594 at 9/13/16 3:29 PM:
---

the effect would be maven failing the build.

what does this buy us over the current flagging from precommit checks for 
findbugs errors? are the error messages sufficient to guide a new contributor 
in fixing things themselves, or is this likely to result in new folks 
complaining about the build breaking in confusing ways when their "simple" fix 
includes a findbugs error?


was (Author: busbey):
the effect would be maven failing the build.

what does this buy use over the current flagging from precommit checks for 
findbugs errors? are the error messages sufficient to guide a new contributor 
in fixing things themselves, or is this likely to result in new folks 
complaining about the build breaking in confusing ways when their "simple" fix 
includes a findbugs error?

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block builds if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13603) Remove package line length checkstyle rule

2016-09-13 Thread Shane Kumpf (JIRA)
Shane Kumpf created HADOOP-13603:


 Summary: Remove package line length checkstyle rule
 Key: HADOOP-13603
 URL: https://issues.apache.org/jira/browse/HADOOP-13603
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Shane Kumpf
Priority: Trivial


The packages related to the DockerLinuxContainerRuntime all exceed the 80 char 
line length limit enforced by checkstyle. This causes every build to fail with 
a -1. I would like to exclude this rule from causing a failure.

{code}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
 Line is longer than 80 characters (found 88).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
 Line is longer than 80 characters (found 88).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
 Line is longer than 80 characters (found 88).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
 org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
Line is longer than 80 characters (found 84).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
 org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
Line is longer than 80 characters (found 81).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
 Line is longer than 80 characters (found 88).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
 Line is longer than 80 characters (found 88).
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
 Line is longer than 80 characters (found 88).
{code}

Alternatively, we could look to restructure the packages here, but I question 
what value this check really provides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487620#comment-15487620
 ] 

Sean Busbey commented on HADOOP-13594:
--

{quote}
bq. what does this buy us over the current flagging from precommit checks for 
findbugs errors?

Unfortunately, findbugs warnings sometimes get overlooked (e.g. HADOOP-11602, 
calling String.toUpper/toLower without configuring the locale). If the compile 
fails when findbugs generates warnings, reviewers can notice them easily. 
However, there is also a con: compile time will increase.
{quote}

I am extremely skeptical of fixing people-problems (like committers not 
following the agreed criteria for commits and ignoring precommit feedback) with 
heavy-handed tech choices. This effectively removes our ability to ignore a 
false-positive and I suspect makes the on-boarding process for new contributors 
unnecessarily hostile.

{quote}
Sean Busbey I'm assuming we merge this after addressing all the findbugs warnings.
{quote}

This won't help if the new contribution is what introduces a findbugs warning. 
I would prefer we not hit new folks directly with the build failing without 
there being sufficient guidance for them to fix things on their own. Could you 
paste a sample output from a failed run? Maybe I'm overly concerned about the 
impact this will have on the new contributor experience.


> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block builds if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS to use Jetty

2016-09-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487809#comment-15487809
 ] 

John Zhuge commented on HADOOP-13597:
-

[~bobhansen] commented in HADOOP-10860:
bq. [~wheat9] converted the DN side of webhdfs to use Netty for performance and 
stability. He may have some experience to share.


> Switch KMS to use Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.
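For anyone who hasn't used embedded Jetty, a generic sketch of what a minimal 
Jetty 9 servlet container looks like (illustrative only; the port, context path, 
and servlet are made up and have nothing to do with the eventual KMS code):

{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class EmbeddedJettySketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(9600);                 // hypothetical port
    ServletContextHandler context = new ServletContextHandler();
    context.setContextPath("/kms");                   // hypothetical path
    context.addServlet(new ServletHolder(new HttpServlet() {
      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse resp)
          throws IOException {
        resp.getWriter().println("ok");
      }
    }), "/*");
    server.setHandler(context);
    server.start();
    server.join();
  }
}
{code}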



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-09-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-7363:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.9.0
Target Version/s: 2.9.0
  Status: Resolved  (was: Patch Available)

Since we decided that this is not going into branch-2, resolving this issue. 
[~boky01] Thank you for the contribution.

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Fix For: 2.9.0
>
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 
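A minimal sketch of what such an addition usually looks like (illustrative, 
assuming the JUnit 3 style base class exposes a protected {{fs}} field; this is 
not the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileSystemContractBaseTest;

public class RawLocalContractSketch extends FileSystemContractBaseTest {
  @Override
  protected void setUp() throws Exception {
    // Point the shared contract tests at the raw local file system.
    fs = FileSystem.getLocal(new Configuration()).getRawFileSystem();
    super.setUp();
  }
}
{code}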



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Attachment: HADOOP-13605-branch-2-001.patch

Patch 001

Code changes are all around FS load diagnostics and logging.
Otherwise: javadoc fixes, some formatting, and moving to the Java 7 diamond (<>) 
syntax in constructors.

Essentially: housekeeping.

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch
>
>
> We can't easily debug FS instantiation problems, as there isn't much detail 
> about what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J, 
> as the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}}, and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments:
> # add the list of exceptions, indicating which base classes throw 
> UnsupportedOperationException
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another.
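A minimal sketch of the logging pattern described above (illustrative only; the 
surrounding FileSystem internals are omitted):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

abstract class FileSystemLoggingSketch {
  // New diagnostics go through a private SLF4J logger; the existing public
  // commons-logging LOG field is left alone for compatibility.
  private static final Logger LOGGER =
      LoggerFactory.getLogger(FileSystemLoggingSketch.class);

  void logFileSystemLookup(String scheme, String implClass) {
    LOGGER.debug("Looking up FileSystem implementation for scheme {}: {}",
        scheme, implClass);
  }
}
{code}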



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487985#comment-15487985
 ] 

Hadoop QA commented on HADOOP-13590:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 148 unchanged - 0 fixed = 150 total (was 148) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828190/HADOOP-13590.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b9435787cae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e3f7f58 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10496/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10496/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10496/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  

[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488129#comment-15488129
 ] 

Hadoop QA commented on HADOOP-13218:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 2 new + 817 unchanged 
- 46 fixed = 819 total (was 863) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
32s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
6s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13218 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828161/HADOOP-13218-v03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux c6fa2de73ee3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e3f7f58 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488213#comment-15488213
 ] 

Chris Nauroth commented on HADOOP-13599:


{code}
  private static final AtomicBoolean closed = new AtomicBoolean(false);
{code}

Please make this a member variable, so multiple {{S3AFileSystem}} instances 
created within the same process can get closed independently.  I assume this is 
just a copy-paste error from {{warnedOfCoreThreadDeprecation}}, where 
{{static}} is appropriate.

{code}
  @Test
  public void testCloseReentrant() throws Throwable {
conf = new Configuration();
fs = S3ATestUtils.createTestFileSystem(conf);
fs.close();
fs.close();
  }
{code}

I suggest changing the name of this test method, because it doesn't really 
cover reentrancy.  It does cover idempotence though, so maybe 
{{testCloseIdempotent}}?

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch
>
>
> We've had a report of Hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139.
> S3A close() needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done.
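A minimal sketch of the re-entrancy guard being discussed (simplified; the real 
S3AFileSystem close() does much more):

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

class CloseOnceSketch {
  // Per-instance flag (not static), as noted in the review comment above.
  private final AtomicBoolean closed = new AtomicBoolean(false);

  public void close() throws IOException {
    // The first caller flips the flag and does the real work; later or
    // re-entrant callers return immediately, and no lock is held, so
    // shutdown threads cannot deadlock on this method.
    if (!closed.compareAndSet(false, true)) {
      return;
    }
    releaseResources();
  }

  private void releaseResources() throws IOException {
    // placeholder for the actual cleanup work
  }
}
{code}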



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13573:
--
Fix Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the MetadataStore interface 
> to validate that the different implementations provide correct semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488036#comment-15488036
 ] 

Hudson commented on HADOOP-13546:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10436 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10436/])
HADOOP-13546. Override equals and hashCode of the default retry policy (jing9: 
rev 08d8e0ba259f01465a83d8db09466dfd46b7ec81)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/retry/TestConnectionRetryPolicy.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryUtils.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestReuseRpcConnections.java
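
For readers skimming the digest, a generic sketch of the equals/hashCode pattern the commit message refers to; the class and field names here are purely illustrative, not the actual RetryPolicies code:

{code}
@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (obj == null || getClass() != obj.getClass()) {
    return false;
  }
  // two policies with the same parameters compare equal, so ConnectionId
  // instances built from them can map onto one shared RPC connection
  ExampleRetryPolicy other = (ExampleRetryPolicy) obj;
  return maxRetries == other.maxRetries && sleepTimeMs == other.sleepTimeMs;
}

@Override
public int hashCode() {
  return 31 * maxRetries + Long.hashCode(sleepTimeMs);
}
{code}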


> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, 
> HADOOP-13546-HADOOP-13436.007.patch
>
>
> Override #equals and #hashCode so that multiple equivalent instances eventually 
> share the same RPC connection, given that the other arguments used to construct 
> ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-09-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488049#comment-15488049
 ] 

Wei-Chiu Chuang commented on HADOOP-12974:
--

I'll review it in a day or two. Thanks a lot for working on this.

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch, 
> HADOOP-12974v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch
>
>
> We can't easily debug FS instantiation problems, as there isn't much detail about 
> what is going on.
> We can add more logging, but we cannot simply switch {{FileSystem.LOG}} to SLF4J, 
> because the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}}, and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488085#comment-15488085
 ] 

Hadoop QA commented on HADOOP-12974:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-12974 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12974 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806708/HADOOP-12974v5.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10499/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch, 
> HADOOP-12974v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488082#comment-15488082
 ] 

Chris Nauroth commented on HADOOP-13169:


[~rajesh.balamohan], thank you for this patch, and thank you also for the 
performance testing on both S3A and HDFS.  I have a few comments.

{code}
  private int fileStatusLimit = 1000;
  private boolean randomizeFileListing;
{code}

Please make both of these {{final}}, because they only need to be assigned 
inside the constructor.

{code}
LOG.info("numListstatusThreads=" + numListstatusThreads
+ ", fileStatusLimit=" + fileStatusLimit
+ ", randomizeFileListing=" + randomizeFileListing);
{code}

{code}
  LOG.info("Number of paths written to fileListing=" + 
fileStatusInfoList.size());
{code}

Would you please switch these to debug level?  These messages are more like the 
debug level logging that this class already does.
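
For example, the first of those statements at debug level, keeping the commons-logging string-concatenation style already used in this class (the {{isDebugEnabled}} guard is optional):

{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("numListstatusThreads=" + numListstatusThreads
      + ", fileStatusLimit=" + fileStatusLimit
      + ", randomizeFileListing=" + randomizeFileListing);
}
{code}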

{code}
  List fileStatuses =
  Collections.synchronizedList(new ArrayList());
{code}

{code}
  List statusList =
  Collections.synchronizedList(new ArrayList());
{code}

Is the locking on these lists necessary?  If I am reading this code correctly, 
concurrency happens through the {{ProducerConsumer}} executing 
{{FileStatusProcessor}} in parallel, but I don't see the 
{{FileStatusProcessor}} threads accessing these lists.  Instead, it appears 
that completed work is pulled back out of the {{ProducerConsumer}} and added to 
the lists on a single thread.  (If I am misreading this code, please let me 
know.)

{code}
  List srcPaths = new ArrayList();
{code}

Lines like this can use the Java 7 diamond operator, so {{new ArrayList<>()}}.  
This patch is targeted to 2.8.0, and we require at least Java 7 on that branch, 
so using the diamond operator is fine.
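
For example (the element type here is only an assumption about what the variable holds):

{code}
List<Path> srcPaths = new ArrayList<>();
{code}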

{code}
Path p = new Path("/tmp/", String.valueOf(i));
{code}

Minor nit: please remove the trailing slash from "/tmp/", as the {{Path}} 
constructor will normalize and strip trailing slashes anyway.

{code}
OutputStream out = fs.create(fileName);
out.write(i);
out.close();
{code}

Please use try-finally or try-with-resources to guarantee close of each stream. 
 It's unlikely that this will ever be a problem in practice, but some platforms 
like Windows are very picky about leaked file handles.
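
A minimal try-with-resources sketch reusing the names from the snippet above:

{code}
// the stream is closed automatically even if write() throws
try (OutputStream out = fs.create(fileName)) {
  out.write(i);
}
{code}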

{code}
Assert.fail("Should have failed as file listing should be randomized");
{code}

Is there a random chance that this assertion could fail if the call to 
{{Collections#shuffle}} results in the same order that was input?

{code}
} catch (IOException e) {
  LOG.error("Exception encountered ", e);
  Assert.fail("Test build listing failed");
{code}

I suggest not catching the exception and simply letting it be thrown out of the 
test method.  If the test fails due to an exception, this would make the 
problem more visible in the JUnit report on Jenkins.

I know the other tests in this suite are coded to catch the exception.  No need 
to go back and clean up all of those as part of this patch.  (I'd still +1 it 
though if you feel like doing that clean-up.)

{code}
Assert.assertEquals(fs.makeQualified(srcFiles.get(idx)), 
currentVal.getPath());
{code}

For this assertion, I suggest passing a descriptive message including the value 
of {{idx}} as the first argument.  That way, if we ever see a failure, we'll 
have more information about which iteration of the loop failed.
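
Something along these lines, with the message text being just an example:

{code}
Assert.assertEquals("Unexpected path for listing entry at index " + idx,
    fs.makeQualified(srcFiles.get(idx)), currentVal.getPath());
{code}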

The Checkstyle warnings are for exceeding the maximum line length.  Please fix 
those in the next patch revision.


> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch
>
>
> When copying files to S3, based on file listing some mappers can get into S3 
> partition hotspots. This would be more visible, when data is copied from hive 
> warehouse with lots of partitions (e.g date partitions). In such cases, some 
> of the tasks would tend to be a lot more slower than others. It would be good 
> to randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487851#comment-15487851
 ] 

Arpit Agarwal commented on HADOOP-13602:


There's one remaining warning:
{code}
[INFO] 
org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo.getSvnUriInfo(String)
 uses String.indexOf(String) instead of String.indexOf(int) 
["org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo"] At 
VersionInfoMojo.java:[lines 49-341]
{code}
Okay to fix that either here or separately.
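
The fix findbugs is suggesting is to search for a single char rather than a one-character String; a small example with a hypothetical variable name:

{code}
// String.indexOf(int) avoids the substring-search overhead that findbugs flags
int slashIndex = svnUri.indexOf('/');   // instead of svnUri.indexOf("/")
{code}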

> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13604) Abort retry loop when RPC has an unrecoverable error

2016-09-13 Thread Henry Robinson (JIRA)
Henry Robinson created HADOOP-13604:
---

 Summary: Abort retry loop when RPC has an unrecoverable error
 Key: HADOOP-13604
 URL: https://issues.apache.org/jira/browse/HADOOP-13604
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Henry Robinson


I've seen an issue where, after an RPC client hits an error obtaining a TGT from 
Kerberos, the client continues to retry even though there is no chance of 
success (the no-login window is set to 600s).

In this particular deployment, the client retries 15 times at 15s intervals, 
leading to a delay of more than three minutes before the failure is bubbled up 
to the client when the RPC ultimately fails.

Unrecoverable errors (like failures to login to Kerberos) should lead to fast 
aborts of the retry loop.
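
As a rough illustration of the desired fail-fast behaviour; this is not the actual Hadoop retry machinery, and {{invokeRpc}} and {{isUnrecoverable}} are hypothetical helpers:

{code}
// retry transient failures, but abort immediately on errors that cannot
// succeed on a later attempt (e.g. a Kerberos login failure)
Object invokeWithRetries(int maxRetries, long retryIntervalMs) throws Exception {
  for (int attempt = 0; ; attempt++) {
    try {
      return invokeRpc();
    } catch (Exception e) {
      if (isUnrecoverable(e) || attempt >= maxRetries) {
        throw e;   // fail fast instead of sleeping through pointless retries
      }
      Thread.sleep(retryIntervalMs);
    }
  }
}
{code}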



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13605:
---

 Summary: Clean up FileSystem javadocs, logging; improve 
diagnostics on FS load
 Key: HADOOP-13605
 URL: https://issues.apache.org/jira/browse/HADOOP-13605
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


We can't easily debug FS instantiation problems, as there isn't much detail about 
what is going on.

We can add more logging, but we cannot simply switch {{FileSystem.LOG}} to SLF4J, 
because the class is used in too many places, including tests which cast it. Instead, 
add a new private SLF4J Logger, {{LOGGER}}, and switch logging to it. 

While working in the base FileSystem class, take the opportunity to clean up 
javadocs and comments

# add the list of exceptions, including indicating which base classes throw 
UnsupportedOperationExceptions
# cut bits in the comments which are not true

The outcome of this patch is that IDEs shouldn't highlight most of the file as 
flawed in some way or another
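
For reference, the SLF4J pattern described above looks roughly like this; the message and variable are illustrative:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// new log statements go through a private SLF4J logger, while the old
// commons-logging FileSystem.LOG field stays untouched for compatibility
private static final Logger LOGGER = LoggerFactory.getLogger(FileSystem.class);

// parameterized messages avoid string concatenation when the level is disabled
LOGGER.debug("Looking for a FileSystem implementation for scheme {}", scheme);
{code}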



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-09-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-13546:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk, branch-2 and branch-2.8. Thanks for the 
contribution [~xiaobingo]!

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, 
> HADOOP-13546-HADOOP-13436.007.patch
>
>
> Override #equals and #hashCode so that multiple equivalent instances eventually 
> share the same RPC connection, given that the other arguments used to construct 
> ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13606:

Status: Patch Available  (was: Open)

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13606:

Attachment: HADOOP-13606-branch-2-001.patch

move to a service file

tested in hadoop-openstack against RAX US; also tested in spark 

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.

2016-09-13 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13452:
--
Attachment: HADOOP-13452.001.patch

Attaching v1 patch for LocalMetadataStore and unit test.  I wanted to get this 
out for review.  Tests pass and test-patch was clean.  This patch applies on 
top of HADOOP-13573

Still need to do:

- Filter incoming Paths.. something like S3AFileSystem#pathToKey()
- LruHashMap should have a unit test.


> S3Guard: Implement access policy for intra-client consistency with in-memory 
> metadata store.
> 
>
> Key: HADOOP-13452
> URL: https://issues.apache.org/jira/browse/HADOOP-13452
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13452.001.patch
>
>
> Implement an S3A access policy based on an in-memory metadata store.  This 
> can provide consistency within the same client without needing to integrate 
> with an external system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13606:
---
Hadoop Flags: Reviewed

+1.

I'm not currently set up for end-to-end testing against an OpenStack instance.  
I did build a distro, deploy it, and confirm that shell commands for a swift: 
URI dispatch successfully into the hadoop-openstack code, so I know the class 
loading through the service loader is working fine.

Steve, thank you for sharing your test pass results.

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488193#comment-15488193
 ] 

Chris Nauroth edited comment on HADOOP-13606 at 9/13/16 7:42 PM:
-

+1 pending pre-commit.

I'm not currently set up for end-to-end testing against an OpenStack instance.  
I did build a distro, deploy it, and confirm that shell commands for a swift: 
URI dispatch successfully into the hadoop-openstack code, so I know the class 
loading through the service loader is working fine.

Steve, thank you for sharing your test pass results.


was (Author: cnauroth):
+1.

I'm not currently set up for end-to-end testing against an OpenStack instance.  
I did build a distro, deploy it, and confirm that shell commands for a swift: 
URI dispatch successfully into the hadoop-openstack code, so I know the class 
loading through the service loader is working fine.

Steve, thank you for sharing your test pass results.

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487992#comment-15487992
 ] 

Hadoop QA commented on HADOOP-13573:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13573 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827942/HADOOP-13573.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b0051298887d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / db6d243 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-aws.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-aws.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/artifact/patchprocess/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10497/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: 

[jira] [Created] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13606:
---

 Summary: swift FS to add a service load metadata file
 Key: HADOOP-13606
 URL: https://issues.apache.org/jira/browse/HADOOP-13606
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/swift
Affects Versions: 2.7.3
Reporter: Steve Loughran
Assignee: Steve Loughran


add a metadata file giving the FS impl of swift; remove the entry from 
core-default.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488855#comment-15488855
 ] 

Aaron Fabbri commented on HADOOP-13573:
---

Thanks for the review [~cnauroth].  This is all good feedback.   I don't know 
what I was thinking in that try-with-resources.. haha.  I'll have a followup 
patch shortly.

> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the MetadataStore interface 
> to validate that the different implementations provide correct semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488762#comment-15488762
 ] 

Xiao Chen commented on HADOOP-7352:
---

Thanks John for continuing the work here!
Looks like my comments in HADOOP-13191 are addressed, and I think the patch 
looks fine in general. Would like [~ste...@apache.org] and others' review too.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488782#comment-15488782
 ] 

Chris Nauroth commented on HADOOP-13573:


[~fabbri], thank you for the patch.  The structure looks good overall.  Here 
are a few comments.

{code}
  public AbstractMSContract() { };
{code}

I suggest removing this line, since Java will give us a default constructor 
automatically.

{code}
try (MetadataStore ms = contract.getMetadataStore()) {
  assertNotNull("null MetadataStore", ms);
  ms.initialize(contract.getConf());
  this.ms = ms;
}
{code}

I think we exit this block with the {{MetadataStore}} closed because of the 
try-with-resources.  It probably wasn't noticeable while testing with 
{{LocalMetadataStore}}, because that has a no-op {{close}}.
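
One way to keep the store open for the rest of the test, sticking to the names in the snippet above and assuming it is closed later in a tearDown method:

{code}
// initialize and keep the store; close it in tearDown(), not here
MetadataStore ms = contract.getMetadataStore();
assertNotNull("null MetadataStore", ms);
ms.initialize(contract.getConf());
this.ms = ms;
{code}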

{code}
assertDirectorySize("/ADirectory1/db1/", 1);
{code}

{code}
assertDirectorySize("/ADirectory1/db1/", 2);
{code}

I suggest removing trailing slashes from all path strings to avoid a potential 
source of confusion.  The {{Path}} constructor will normalize and drop the 
trailing slashes anyway.

{code}
  @Test
  public void testDeleteRecursiveRoot() throws Exception {
setUpDeleteTest();

ms.deleteSubtree(new Path("/"));
{code}

I don't expect the {{FileSystem}} to call the {{MetadataStore}} this way, 
because deletion of root should be rejected.  For the DynamoDB implementation, 
I don't think it can issue a meaningful delete for root, because root does not 
have a parent, so it won't fit well in the schema.  This is not a big deal 
though, and we can revisit during the DynamoDB implementation.

{code}
assertEquals("File size as expected", meta.getFileStatus().getLen(), 100);
{code}

I think the "expected" and "actual" arguments were inverted in this assertion.
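
JUnit's {{assertEquals}} takes the expected value first, so the corrected form would be:

{code}
assertEquals("File size as expected", 100, meta.getFileStatus().getLen());
{code}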

{code}
meta = ms.get(new Path("bollocks"));
{code}

Excellent test data!  :-)

Suggestions for additional tests:
* {{put}} overwriting an existing path.
* {{delete}} a path that doesn't exist.  (Does nothing, but should complete 
without exception.)
* Same thing for {{deleteSubtree}}.
* Some tests already check the returned value of {{FileStatus#getLen}}.  It 
would be good to do the same for {{isEmptyDirectory}} (on directories) and 
{{getModificationTime}} and {{getBlockSize}} (on files).
In general, assertions on values of {{FileStatus}} properties: 
isEmptyDirectory, mtime, blockSize, length.


> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the MetadataStore interface 
> to validate that the different implementations provide correct semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488861#comment-15488861
 ] 

Tsuyoshi Ozawa commented on HADOOP-13594:
-

[~busbey] thanks for your feedback. Sounds reasonable. I would like to close 
this :-)

Please feel free to reopen this if someone is interested in it.

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13052) ChecksumFileSystem mishandles crc file permissions

2016-09-13 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1544#comment-1544
 ] 

Chris Trezzo commented on HADOOP-13052:
---

Thanks!

> ChecksumFileSystem mishandles crc file permissions
> --
>
> Key: HADOOP-13052
> URL: https://issues.apache.org/jira/browse/HADOOP-13052
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13052.patch
>
>
> ChecksumFileSystem does not override permission-related calls to apply those 
> operations to the hidden crc files.  Clients may be unable to read the crcs 
> if the file is created with strict permissions and then relaxed.
> The checksum fs is designed to work with or w/o crcs present, so it silently 
> ignores FNF exceptions.  The java file stream apis unfortunately may only 
> throw FNF, so permission denied becomes FNF resulting in this bug going 
> silently unnoticed.
> (Problem discovered via public localizer.  Files are downloaded as 
> user-readonly and then relaxed to all-read.  The crc remains user-readonly)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13052) ChecksumFileSystem mishandles crc file permissions

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-13052:
-
Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5 (trivial).

> ChecksumFileSystem mishandles crc file permissions
> --
>
> Key: HADOOP-13052
> URL: https://issues.apache.org/jira/browse/HADOOP-13052
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13052.patch
>
>
> ChecksumFileSystem does not override permission-related calls to apply those 
> operations to the hidden crc files.  Clients may be unable to read the crcs 
> if the file is created with strict permissions and then relaxed.
> The checksum fs is designed to work with or w/o crcs present, so it silently 
> ignores FNF exceptions.  The java file stream apis unfortunately may only 
> throw FNF, so permission denied becomes FNF resulting in this bug going 
> silently unnoticed.
> (Problem discovered via public localizer.  Files are downloaded as 
> user-readonly and then relaxed to all-read.  The crc remains user-readonly)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13608) DelegationTokenAuthenticationFilter doesn't provide the impersonator information

2016-09-13 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HADOOP-13608:
-

 Summary: DelegationTokenAuthenticationFilter doesn't provide the 
impersonator information
 Key: HADOOP-13608
 URL: https://issues.apache.org/jira/browse/HADOOP-13608
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Hrishikesh Gadre


Currently DelegationTokenAuthenticationFilter doesn't provide information 
about the impersonator (e.g. the user name). This is typically useful from an audit 
perspective, e.g. to find out which intermediate application was used to submit 
a specific request. We should provide this information as a request attribute.

Please refer to SENTRY-1122 for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12810) FileSystem#listLocatedStatus causes unnecessary RPC calls

2016-09-13 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488792#comment-15488792
 ] 

Chris Trezzo commented on HADOOP-12810:
---

Thanks!

> FileSystem#listLocatedStatus causes unnecessary RPC calls
> -
>
> Key: HADOOP-12810
> URL: https://issues.apache.org/jira/browse/HADOOP-12810
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Affects Versions: 2.7.2
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-12810.1.patch
>
>
> {{FileSystem#listLocatedStatus}} lists the files in a directory and then 
> calls {{getFileBlockLocations(stat.getPath(), ...)}} for each instead of 
> {{getFileBlockLocations(stat, ...)}}. That function with the path arg just 
> calls {{getFileStatus}} to get another file status from the path and calls 
> the file status version, so this ends up calling {{getFileStatus}} 
> unnecessarily.
> This is particularly bad for S3, where {{getFileStatus}} is expensive. 
> Avoiding the extra call improved input split calculation time for a data set 
> in S3 by ~20x: from 10 minutes to 25 seconds.
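
In other words, the cheaper form passes the {{FileStatus}} that the listing already returned; a sketch only, with {{stat}} standing in for that status:

{code}
// reuses the status object from the listing, so no extra getFileStatus round trip
BlockLocation[] locations = getFileBlockLocations(stat, 0, stat.getLen());
{code}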



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7198) Hadoop defaults for web UI ports often fall smack in the middle of Linux ephemeral port range

2016-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-7198.
-
Resolution: Duplicate

Believe this is handled by HDFS-9427 and related JIRAs. Resolving; any 
remaining ports in the ephemeral range can be addressed in specific JIRAs.

> Hadoop defaults for web UI ports often fall smack in the middle of Linux 
> ephemeral port range
> -
>
> Key: HADOOP-7198
> URL: https://issues.apache.org/jira/browse/HADOOP-7198
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Philip Zeyliger
>Priority: Trivial
>
> It turns out (see http://en.wikipedia.org/wiki/Ephemeral_port and  
> /proc/sys/net/ipv4/ip_local_port_range) that when you bind to port 0, Linux 
> chooses an ephemeral port.  On my default-ridden Ubuntu Maverick box and on 
> CentOS 5.5, that range is 32768-61000.  So, when HBase binds to 60030 or when 
> mapReduce binds to 50070, there's a small chance that you'll conflict with, 
> say, an FTP session, or with some other Hadoop daemon that's had a listening 
> address configured as :0.
> I don't know that there's a practical resolution here, since changing the 
> defaults seems like an ill-fated effort, but if you have any ephemeral port 
> use, you can run into this.  We've now run into it once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488672#comment-15488672
 ] 

Kai Zheng commented on HADOOP-13218:


[~kihwal], thanks for looking at this.

bq. Also marked it as an incompatible change. We should populate the release 
note field, so that other downstream projects or users can be informed.
Agree. Actually this was planned and meant to be done in the master JIRA 
HADOOP-12579 once all its sub-tasks are done. Kindly note this is just a 
sub-task migrating the remaining tests on the Hadoop side, and it isn't meant to 
have a real impact, such as incompatible changes, on production code. As [~jlowe] 
pointed out above, it was in fact a mistake that the default RPC engine was changed. 
I'll review [~zhouwei]'s updated patch and make sure this is corrected.

Sounds good to you?

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests onto the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for the YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work from HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13573:
---
Affects Version/s: (was: 2.9.0)
 Target Version/s: HADOOP-13345  (was: 2.9.0)
Fix Version/s: (was: 3.0.0-alpha2)

> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the MetadataStore interface 
> to validate that the different implementations provide correct semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12810) FileSystem#listLocatedStatus causes unnecessary RPC calls

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12810:
-
Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5 (trivial). I'll also get MAPREDUCE-6637.

> FileSystem#listLocatedStatus causes unnecessary RPC calls
> -
>
> Key: HADOOP-12810
> URL: https://issues.apache.org/jira/browse/HADOOP-12810
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Affects Versions: 2.7.2
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Fix For: 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-12810.1.patch
>
>
> {{FileSystem#listLocatedStatus}} lists the files in a directory and then 
> calls {{getFileBlockLocations(stat.getPath(), ...)}} for each instead of 
> {{getFileBlockLocations(stat, ...)}}. That function with the path arg just 
> calls {{getFileStatus}} to get another file status from the path and calls 
> the file status version, so this ends up calling {{getFileStatus}} 
> unnecessarily.
> This is particularly bad for S3, where {{getFileStatus}} is expensive. 
> Avoiding the extra call improved input split calculation time for a data set 
> in S3 by ~20x: from 10 minutes to 25 seconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13594) findbugs warnings to block a build

2016-09-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa resolved HADOOP-13594.
-
Resolution: Not A Bug

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13607) Specify and test contract for FileSystem#close.

2016-09-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13607:
---
Target Version/s: 2.9.0

> Specify and test contract for FileSystem#close.
> ---
>
> Key: HADOOP-13607
> URL: https://issues.apache.org/jira/browse/HADOOP-13607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> This issue proposes to enhance the {{FileSystem}} specification by describing 
> the expected semantics of {{FileSystem#close}} and adding corresponding 
> contract tests.  Notable aspects are that the method must be idempotent as 
> dictated by {{java.io.Closeable}} and closing also interacts with the 
> delete-on-exit feature.
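
A minimal sketch of the kind of contract test this implies; the test name and setup are assumptions:

{code}
@Test
public void testCloseIsIdempotent() throws Exception {
  FileSystem fs = FileSystem.get(new Configuration());
  fs.close();
  // java.io.Closeable requires that a second close has no effect
  fs.close();
}
{code}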



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13607) Specify and test contract for FileSystem#close.

2016-09-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488263#comment-15488263
 ] 

Chris Nauroth commented on HADOOP-13607:


I'd also like to add tests to {{AbstractContractCreateTest}} and 
{{AbstractContractOpenTest}} that test idempotence of the stream {{close}} 
methods.

> Specify and test contract for FileSystem#close.
> ---
>
> Key: HADOOP-13607
> URL: https://issues.apache.org/jira/browse/HADOOP-13607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> This issue proposes to enhance the {{FileSystem}} specification by describing 
> the expected semantics of {{FileSystem#close}} and adding corresponding 
> contract tests.  Notable aspects are that the method must be idempotent as 
> dictated by {{java.io.Closeable}} and closing also interacts with the 
> delete-on-exit feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488349#comment-15488349
 ] 

Hadoop QA commented on HADOOP-13606:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13606 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828316/HADOOP-13606-branch-2-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 65e7f85b0e57 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488472#comment-15488472
 ] 

Chris Nauroth commented on HADOOP-13606:


No new tests necessary, and the whitespace warnings aren't really related.

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml
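For context, Hadoop's FileSystem implementations are discovered via Java's ServiceLoader, so the change boils down to shipping a service descriptor in the swift module rather than an entry in core-default.xml. A minimal sketch of such a descriptor, assuming the implementation class is org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem and the usual resources layout:

{noformat}
# src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
{noformat}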



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-09-13 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488598#comment-15488598
 ] 

Xiaobing Zhou commented on HADOOP-13546:


Thank you [~jingzhao] for committing it.

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, 
> HADOOP-13546-HADOOP-13436.007.patch
>
>
> Override #equals and #hashCode so that multiple equivalent instances compare 
> equal and eventually share the same RPC connection, given that the other 
> arguments used to construct the ConnectionId are the same.
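As an illustration of the idea only (not the actual ipc code), the sketch below uses a hypothetical SimpleConnectionId and a map-backed cache: without equals/hashCode, two ids built from identical arguments land in different cache slots and each opens its own connection, which is exactly the leak described above.

{code:java}
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical stand-in for the ipc ConnectionId; fields are illustrative. */
final class SimpleConnectionId {
  private final String address;
  private final String protocol;
  private final String user;

  SimpleConnectionId(String address, String protocol, String user) {
    this.address = address;
    this.protocol = protocol;
    this.user = user;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof SimpleConnectionId)) {
      return false;
    }
    SimpleConnectionId that = (SimpleConnectionId) o;
    return address.equals(that.address)
        && protocol.equals(that.protocol)
        && user.equals(that.user);
  }

  @Override
  public int hashCode() {
    return Objects.hash(address, protocol, user);
  }
}

/** Two ids built from the same arguments now hit the same cache entry. */
class ConnectionCacheSketch {
  private final ConcurrentHashMap<SimpleConnectionId, Object> connections =
      new ConcurrentHashMap<>();

  Object getConnection(SimpleConnectionId id) {
    // Without equals/hashCode the second identical id would open a new
    // connection instead of reusing the cached one.
    return connections.computeIfAbsent(id, k -> new Object());
  }
}
{code}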



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-09-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488649#comment-15488649
 ] 

Kai Zheng commented on HADOOP-11828:


bq. I want to know weather Hitch Hiker is attached to Hadoop or not and if it 
is attached I want to know more about its commands.
Good question: yes and no. The Hitchhiker coder is already in the codebase on the 
Hadoop Common side, but it is not wired into the HDFS side yet. HDFS currently 
does all of its work through the raw erasure coder API, whereas the HH coder is 
implemented against the erasure coder API. Evolving HDFS towards using erasure 
coders is still ongoing, but it is a low priority on my side. I'd be glad to 
help if anyone wants to push this forward, though.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Fix For: 3.0.0-alpha1
>
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HADOOP-11828-hitchhikerXOR-V6.patch, HADOOP-11828-hitchhikerXOR-V7.patch, 
> HADOOP-11828-v8.patch, HDFS-7715-hhxor-decoder.patch, 
> HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13607) Specify and test contract for FileSystem#close.

2016-09-13 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13607:
--

 Summary: Specify and test contract for FileSystem#close.
 Key: HADOOP-13607
 URL: https://issues.apache.org/jira/browse/HADOOP-13607
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Chris Nauroth


This issue proposes to enhance the {{FileSystem}} specification by describing 
the expected semantics of {{FileSystem#close}} and adding corresponding 
contract tests.  Notable aspects are that the method must be idempotent as 
dictated by {{java.io.Closeable}} and closing also interacts with the 
delete-on-exit feature.
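A rough sketch of what one such contract test could look like; the class and test names here are assumptions, not the eventual patch:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.Test;

public class TestFileSystemCloseSketch {

  @Test
  public void testCloseIsIdempotent() throws Exception {
    // newInstance() avoids closing the shared, cached FileSystem that other
    // tests in the JVM may still be using.
    FileSystem fs =
        FileSystem.newInstance(URI.create("file:///"), new Configuration());
    fs.close();
    // java.io.Closeable: a second close must be a no-op, not an error.
    fs.close();
  }
}
{code}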



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-09-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488623#comment-15488623
 ] 

Kai Zheng commented on HADOOP-11828:


Hi [~rashmikv], 

Sorry for missing your question and for the late reply. By compatible or 
interoperable, I mean that data encoded by one coder can be decoded by another. 
The new Java coder is compatible with the native ISA-L coder in this sense. In 
some environments your clients may not have the Hadoop native libraries set up, 
so only the pure Java implementation will work there, while other clients and 
the cluster itself use the native ISA-L based coder for better performance. In 
other words, more than one coder implementation may be in use across a cluster 
and its clients, so each implementation must be compatible with the others; 
otherwise the data will be corrupted.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Fix For: 3.0.0-alpha1
>
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HADOOP-11828-hitchhikerXOR-V6.patch, HADOOP-11828-hitchhikerXOR-V7.patch, 
> HADOOP-11828-v8.patch, HDFS-7715-hhxor-decoder.patch, 
> HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488249#comment-15488249
 ] 

Hadoop QA commented on HADOOP-13605:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 79 unchanged - 50 fixed = 87 total (was 129) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 48 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-common-project_hadoop-common-jdk1.8.0_101 with 
JDK v1.8.0_101 generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-common-project_hadoop-common-jdk1.7.0_111 with 
JDK v1.7.0_111 generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 10s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.fs.TestFileSystemCaching |
| JDK v1.7.0_111 Failed junit tests | hadoop.fs.TestFileSystemCaching |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13605 |
| JIRA Patch URL | 

[jira] [Created] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-13 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-13610:
--

 Summary: Clean up AliyunOss integration tests
 Key: HADOOP-13610
 URL: https://issues.apache.org/jira/browse/HADOOP-13610
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
 Fix For: HADOOP-12756


Noticed that some cleanup can be done to these tests, mainly to follow the 
conventions used for other stores (e.g. Azure). For example:
1. OSSContract => AliyunOSSFileSystemContract
2. OSSTestUtils => AliyunOSSTestUtils
3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489144#comment-15489144
 ] 

Wei Zhou commented on HADOOP-13218:
---

I have checked the checkstyle issues and do not think they should be fixed as 
suggested. Thanks!

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, along with some nice cleanups. 
> HADOOP-12579 will be reverted to allow time for the related YARN/MapReduce 
> changes, so this issue is opened to recommit most of the test-related work 
> from HADOOP-12579 for easier tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489164#comment-15489164
 ] 

Hadoop QA commented on HADOOP-13591:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828374/HADOOP-13591-HADOOP-12756.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 7d327538f4f1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 60f66a9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10501/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10501/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10501/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
>  

[jira] [Created] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-13 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-13609:
--

 Summary: Refine credential provider related codes for AliyunOss 
integration
 Key: HADOOP-13609
 URL: https://issues.apache.org/jira/browse/HADOOP-13609
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-12756
Reporter: Kai Zheng


Looking at the AliyunOss integration code, a few findings:
1. {{TemporaryAliyunCredentialsProvider}} could be better named;
2. {{TemporaryAliyunCredentialsProvider}} shares a lot of code with 
{{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be removed;
3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is also used to 
fetch things other than passwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-13 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: HADOOP-13591-HADOOP-12756.003.patch

> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch, HADOOP-13591-HADOOP-12756.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489195#comment-15489195
 ] 

Hudson commented on HADOOP-13218:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10440 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10440/])
HADOOP-13218. Migrate other Hadoop side tests to prepare for removing 
(kai.zheng: rev ea0c2b8b051a2d14927e8f314245442f30748dc8)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestClientProtocolWithDelegationToken.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCallBenchmark.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/server/HSAdminServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestMultipleProtocolServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* (edit) hadoop-common-project/hadoop-common/src/test/proto/test.proto
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java


> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, along with some nice cleanups. 
> HADOOP-12579 will be reverted to allow time for the related YARN/MapReduce 
> changes, so this issue is opened to recommit most of the test-related work 
> from HADOOP-12579 for easier tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489245#comment-15489245
 ] 

Hadoop QA commented on HADOOP-13573:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
41s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
53s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13573 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828377/HADOOP-13573-HADOOP-13345.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5049df43010d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 18f1f68 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10502/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10502/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13573-HADOOP-13345.002.patch, 
> HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the 

[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489128#comment-15489128
 ] 

Wei Zhou commented on HADOOP-13218:
---

The timeout in the org.apache.hadoop.http.TestHttpServerLifecycle test reported 
by Jenkins has nothing to do with this patch; it passes on my local machine. I 
have tested this patch extensively, including unit tests and functional tests 
on a real cluster. The functional tests were done by running built-in 
benchmarks such as TestDFSIO, terasort, etc. Thanks!

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, along with some nice cleanups. 
> HADOOP-12579 will be reverted to allow time for the related YARN/MapReduce 
> changes, so this issue is opened to recommit most of the test-related work 
> from HADOOP-12579 for easier tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13598) Add eol=lf for unix format files in .gitattributes

2016-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489160#comment-15489160
 ] 

Hudson commented on HADOOP-13598:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10439 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10439/])
HADOOP-13598. Add eol=lf for unix format files in .gitattributes. (aajisaka: 
rev 696bc0e0abe57051470da0cbd539ddf21cc60da9)
* (edit) .gitattributes


> Add eol=lf for unix format files in .gitattributes
> --
>
> Key: HADOOP-13598
> URL: https://issues.apache.org/jira/browse/HADOOP-13598
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: Windows, newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13598.001.patch
>
>
> In .gitattributes, eol=crlf is set for .cmd and .bat, but eol=lf is not set 
> for .java, .html, and so on. Setting eol=lf would be great for developers 
> using Windows.
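For illustration, entries of roughly this shape would be involved (the exact patterns in the committed patch may differ):

{noformat}
# Windows-only scripts keep CRLF endings
*.cmd  text eol=crlf
*.bat  text eol=crlf
# force LF for sources and docs so Windows checkouts match the repository
*.java text eol=lf
*.html text eol=lf
{noformat}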



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-13 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13218:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Thanks [~zhouwei] for the update and for completing the tests!
+1 and committed to trunk.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, along with some nice cleanups. 
> HADOOP-12579 will be reverted to allow time for the related YARN/MapReduce 
> changes, so this issue is opened to recommit most of the test-related work 
> from HADOOP-12579 for easier tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13192) org.apache.hadoop.util.LineReader cannot handle multibyte delimiters correctly

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-13192:
-
Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5 (trivial).

> org.apache.hadoop.util.LineReader cannot handle multibyte delimiters correctly
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>Assignee: binde
>Priority: Critical
> Fix For: 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: 
> 0001-HADOOP-13192-org.apache.hadoop.util.LineReader-match.patch, 
> 0002-fix-bug-hadoop-1392-add-test-case-for-LineReader.patch, 
> HADOOP-13192.final.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when line is   bccc and recordDelimiter is aaab, the result should be a,ccc.
> See the code around line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> It should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }
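To make the effect of the corrected backtracking concrete, here is a self-contained sketch of the matching loop (not the LineReader code itself); the input aaaabccc is an illustrative choice consistent with the delimiter aaab and the expected split into a and ccc:

{code:java}
import java.nio.charset.StandardCharsets;

public class DelimiterScanSketch {

  /**
   * Returns the index just past the end of the first occurrence of delim in
   * buffer, or buffer.length if no full delimiter is found.  The
   * "bufferPosn -= delPosn" line is the fix: on a mismatch we rewind past the
   * partially matched delimiter bytes instead of rewinding only one byte.
   */
  static int scan(byte[] buffer, byte[] delim) {
    int delPosn = 0;
    int bufferPosn = 0;
    for (; bufferPosn < buffer.length; ++bufferPosn) {
      if (buffer[bufferPosn] == delim[delPosn]) {
        delPosn++;
        if (delPosn >= delim.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        bufferPosn -= delPosn;   // fixed backtracking (the buggy code did bufferPosn--)
        delPosn = 0;
      }
    }
    return bufferPosn;
  }

  public static void main(String[] args) {
    byte[] line = "aaaabccc".getBytes(StandardCharsets.UTF_8);
    byte[] delim = "aaab".getBytes(StandardCharsets.UTF_8);
    int end = scan(line, delim);
    String record = new String(line, 0, end - delim.length, StandardCharsets.UTF_8);
    String rest = new String(line, end, line.length - end, StandardCharsets.UTF_8);
    // Prints: record=a rest=ccc
    System.out.println("record=" + record + " rest=" + rest);
  }
}
{code}

With the buggy single-byte rewind, the delimiter starting at offset 1 is never found, so the whole buffer would come back as one record.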



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11301:
-
Fix Version/s: 2.6.5

Cherry-picked to 2.6.5 (trivial).

> [optionally] update jmx cache to drop old metrics
> -
>
> Key: HADOOP-11301
> URL: https://issues.apache.org/jira/browse/HADOOP-11301
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Maysam Yabandeh
>Assignee: Maysam Yabandeh
> Fix For: 2.7.0, 2.6.5
>
> Attachments: HADOOP-11301.v01.patch, HADOOP-11301.v02.patch, 
> HADOOP-11301.v03.patch, HADOOP-11301.v04.patch
>
>
> MetricsSourceAdapter::updateJmxCache() skips updating the info cache if no 
> new metric has been added since the last update:
> {code}
>   int oldCacheSize = attrCache.size();
>   int newCacheSize = updateAttrCache();
>   if (oldCacheSize < newCacheSize) {
> updateInfoCache();
>   }
> {code}
> This behavior is not desirable in some applications. For example nntop 
> (HDFS-6982) reports the top users via jmx. The list is updated after each 
> report. The previously reported top users hence should be removed from the 
> cache upon each report request.
> In our production run of nntop we made a change to ignore the size check and 
> always perform updateInfoCache. I am planning to submit a patch including 
> this change. The feature can be enabled by a configuration parameter.
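A toy sketch of the proposed direction (the configuration key and the surrounding structure are assumptions, not the committed patch):

{code:java}
/**
 * Toy illustration of the proposed option (not the actual Hadoop patch):
 * optionally refresh the MBeanInfo cache on every pass instead of only when
 * the attribute cache grew, so removed metrics also disappear from JMX.
 */
class JmxInfoCacheSketch {
  private final boolean alwaysRefreshInfo;  // would be driven by a new config key
  private int attrCacheSize;

  JmxInfoCacheSketch(boolean alwaysRefreshInfo) {
    this.alwaysRefreshInfo = alwaysRefreshInfo;
  }

  void updateJmxCache() {
    int oldCacheSize = attrCacheSize;
    attrCacheSize = rebuildAttrCache();
    // Original behavior: refresh MBeanInfo only when attributes were added,
    // which keeps stale entries (e.g. nntop's rotating top-user list) forever.
    if (alwaysRefreshInfo || oldCacheSize < attrCacheSize) {
      rebuildInfoCache();
    }
  }

  private int rebuildAttrCache() {
    return attrCacheSize;  // placeholder: rebuild and return the new size
  }

  private void rebuildInfoCache() {
    // placeholder: rebuild the cached MBeanInfo
  }
}
{code}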



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-13 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13610 started by Genmao Yu.
--
> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> Noticed that some cleanup can be done to these tests, mainly to follow the 
> conventions used for other stores (e.g. Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-13 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu reassigned HADOOP-13610:
--

Assignee: Genmao Yu

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> Noticed that some cleanup can be done to these tests, mainly to follow the 
> conventions used for other stores (e.g. Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13598) Add eol=lf for unix format files in .gitattributes

2016-09-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13598:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution!

> Add eol=lf for unix format files in .gitattributes
> --
>
> Key: HADOOP-13598
> URL: https://issues.apache.org/jira/browse/HADOOP-13598
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: Windows, newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13598.001.patch
>
>
> In .gitattributes, eol=crlf is set for .cmd and .bat, but eol=lf is not set 
> for .java, .html, and so on. Setting eol=lf would be great for developers 
> using Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-13 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13573:
--
Attachment: HADOOP-13573-HADOOP-13345.002.patch

Attaching v2 patch with fixed filename.  Changes from v1:

Removed unnecessary constructors.

Added integrity testing for more FileStatus fields.

Fixed "how not to use try-with-resources".

Added testPutOverwrite() that replaces an existing entry, gets it back out, and
sanity-checks contents.

Removed trailing slashes on directory paths.

Added comment about testRecursiveRoot() being optional.

Added separate test for deleting non-existing file and recursive dir.

> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13573-HADOOP-13345.002.patch, 
> HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the MetadataStore interface 
> to validate that the different implementations provide correct semantics.
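As a sketch of the contract-test pattern only (the real MetadataStore API will differ), one abstract base class holds the shared assertions and each implementation provides a factory method; a concrete subclass per implementation (in-memory, DynamoDB-backed, ...) then only overrides createStore():

{code:java}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

/** Hypothetical minimal store interface, used only to illustrate the pattern. */
interface SimpleStore {
  void put(String path, String status);
  String get(String path);
}

/** Shared contract: every implementation must pass these tests unchanged. */
public abstract class AbstractStoreContractTest {

  /** Each concrete test class returns a fresh, empty implementation. */
  protected abstract SimpleStore createStore();

  @Test
  public void testPutThenGet() {
    SimpleStore store = createStore();
    store.put("/a/b", "file");
    assertEquals("file", store.get("/a/b"));
  }

  @Test
  public void testPutOverwrite() {
    SimpleStore store = createStore();
    store.put("/a/b", "file");
    store.put("/a/b", "dir");
    // Last write wins: overwriting an entry must replace it, not duplicate it.
    assertEquals("dir", store.get("/a/b"));
  }
}
{code}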



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12482) Race condition in JMX cache update

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12482:
-
Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5 (trivial).

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # metrics sources is updated with new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such case getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-13 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13164:
--
Status: Open  (was: Patch Available)

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13164.branch-2-002.patch, 
> HADOOP-13164.branch-2.WIP.002.patch, HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting in the folder structure, it might take a lot longer time as it 
> invokes getFileStatus multiple times.  Instead, it should be able to break 
> out of the loop once a non-empty directory is encountered. 
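A rough sketch of the early-exit idea; the helper methods here are hypothetical and do not reflect the S3AFileSystem internals:

{code:java}
import org.apache.hadoop.fs.Path;

/** Sketch only; the helper methods are hypothetical, not S3AFileSystem code. */
abstract class FakeDirCleanerSketch {

  void cleanFakeParentDirectories(Path start) {
    Path path = start;
    while (path != null && !path.isRoot()) {
      // One metadata probe per level; stop at the first non-empty ancestor,
      // since every directory above it is necessarily non-empty as well.
      if (!isEmptyDirectory(path)) {
        break;
      }
      deleteFakeDirectoryMarker(path);   // remove the zero-byte "key/" object
      path = path.getParent();
    }
  }

  abstract boolean isEmptyDirectory(Path path);        // hypothetical probe
  abstract void deleteFakeDirectoryMarker(Path path);  // hypothetical delete
}
{code}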



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12348:
-
Fix Version/s: 2.6.5

Cherry-picked to 2.6.5 (trivial).

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit. 
> MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}
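The fix direction is simply to convert the configured TTL into the unit the adapter expects before handing it over; a minimal sketch (not the committed change):

{code:java}
import java.util.concurrent.TimeUnit;

/** Sketch of the fix direction only: convert the configured TTL once, up front. */
class JmxCacheTtlSketch {
  static long toAdapterTtlMillis(long jmxCacheTtlSeconds) {
    // The adapter treats the TTL as milliseconds, so a value read from
    // configuration in seconds must be converted before it is passed in.
    return TimeUnit.SECONDS.toMillis(jmxCacheTtlSeconds);
  }
}
{code}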



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-09-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11361:
-
Target Version/s: 2.7.3, 2.8.0, 2.6.5  (was: 2.8.0, 2.7.3, 2.6.5)
   Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5. Picked also HADOOP-11301, HADOOP-12348, and 
HADOOP-12482 before this.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: supportability
> Fix For: 2.8.0, 2.9.0, 2.6.5, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361-006.patch, HADOOP-11361-007.patch, HADOOP-11361-009.patch, 
> HADOOP-11361.008.patch, HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489463#comment-15489463
 ] 

Hadoop QA commented on HADOOP-13164:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_101 with JDK 
v1.8.0_101 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13164 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828382/HADOOP-13164-branch-2-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a65ff52f5f8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 3f36ac9 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 

[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-13 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13164:
--
Attachment: HADOOP-13164-branch-2-003.patch

Thanks [~ste...@apache.org]. I have rebased the patch for branch-2.

S3 Test results
{noformat}
Results :
Tests run: 296, Failures: 0, Errors: 0, Skipped: 5
{noformat}
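
For anyone skimming the thread, a minimal sketch of the early-exit idea this patch 
is aiming at (illustrative only; {{isEmptyDirectory}} and {{deleteObject}} stand in 
for whatever helpers the real code uses, and this is not the patch itself):

{code}
// Sketch: walk up the parent chain and stop at the first non-empty directory,
// instead of probing every ancestor with getFileStatus().
private void deleteUnnecessaryFakeDirectories(Path path) throws IOException {
  while (path != null && !path.isRoot()) {
    String key = pathToKey(path) + "/";
    if (!isEmptyDirectory(key)) {   // hypothetical emptiness probe
      break;                        // ancestors of a non-empty dir must be kept
    }
    deleteObject(key);              // hypothetical: remove the fake-directory marker
    path = path.getParent();
  }
}
{code}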

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13164-branch-2-003.patch, 
> HADOOP-13164.branch-2-002.patch, HADOOP-13164.branch-2.WIP.002.patch, 
> HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting of the folder structure, it can take much longer because it invokes 
> getFileStatus multiple times. Instead, it should break out of the loop once 
> a non-empty directory is encountered. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-13 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13164:
--
Status: Patch Available  (was: Open)

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13164-branch-2-003.patch, 
> HADOOP-13164.branch-2-002.patch, HADOOP-13164.branch-2.WIP.002.patch, 
> HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting of the folder structure, it can take much longer because it invokes 
> getFileStatus multiple times. Instead, it should break out of the loop once 
> a non-empty directory is encountered. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13598) Add eol=lf for unix format files in .gitattributes

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486817#comment-15486817
 ] 

Hadoop QA commented on HADOOP-13598:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13598 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828196/HADOOP-13598.001.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux b50f02106a63 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f0876b8 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10491/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add eol=lf for unix format files in .gitattributes
> --
>
> Key: HADOOP-13598
> URL: https://issues.apache.org/jira/browse/HADOOP-13598
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
> Attachments: HADOOP-13598.001.patch
>
>
> In .gitattributes, eol=crlf is set for .cmd and .bat, but eol=lf is not set 
> for .java, .html, and so on. Setting eol=lf would be great for developers 
> using Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13599:

Attachment: HADOOP-13599-branch-2-002.patch

Patch 002. This adds a fairly low-value test case that closes an instance twice, 
verifying that nothing visibly fails on the second call; re-entrancy is still 
something that will need manual review.

test endpoint: s3a ireland
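
A rough sketch of what that test can look like (assuming JUnit 4 and an S3A 
integration-test base class exposing a {{getFileSystem()}} accessor; class and 
method names here are illustrative, not taken from the patch):

{code}
import org.apache.hadoop.fs.FileSystem;
import org.junit.Test;

public class ITestS3ACloseTwice extends AbstractS3ATestBase { // assumed base class
  @Test
  public void testCloseTwice() throws Exception {
    FileSystem fs = getFileSystem(); // provided by the assumed test base
    fs.close();
    fs.close(); // the second close must return quietly, not fail or block
  }
}
{code}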

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch
>
>
> We've had a report of hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139
> S3a close needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486870#comment-15486870
 ] 

Hadoop QA commented on HADOOP-13599:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13599 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828199/HADOOP-13599-branch-2-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c01e9b44de6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 8c8ff0c |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 

[jira] [Updated] (HADOOP-13598) Add eol=lf for unix format files in .gitattributes

2016-09-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13598:
---
Labels: Windows newbie  (was: newbie)

LGTM, +1.

> Add eol=lf for unix format files in .gitattributes
> --
>
> Key: HADOOP-13598
> URL: https://issues.apache.org/jira/browse/HADOOP-13598
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: Windows, newbie
> Attachments: HADOOP-13598.001.patch
>
>
> In .gitattributes, eol=crlf is set for .cmd and .bat, but eol=lf is not set 
> for .java, .html, and so on. Setting eol=lf would be great for developers 
> using Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-09-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486689#comment-15486689
 ] 

Steve Loughran commented on HADOOP-10075:
-

Jetty has been great to work with: the first embedded servlet engine. It did 
have problems, ones we had become familiar with (DNs used to have to probe on 
startup to see whether Jetty was actually serving requests; if it hadn't started 
properly they'd terminate). Things like that kept us on an old Jetty version 
for a long time.



> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-09-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486699#comment-15486699
 ] 

Steve Loughran commented on HADOOP-12981:
-

OK, I've misread this.

s3n is the FS we still ship. s3:// is the obsolete one. s3n must not be touched.

-1

Also, all patches against hadoop-aws have to go through the process of the 
submitter declaring which S3 endpoint they ran against. Please get into the 
habit of this, as it's how we deal with Jenkins's lack of testing and how we 
identify issues with different endpoints (Frankfurt, Seoul).

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
> Attachments: HADOOP-12981.001.patch, HADOOP-12981.002.patch
>
>
> It seems all properties defined in {{S3NativeFileSystemConfigKeys}} are not 
> used. Those properties are prefixed by {{s3native}}, and the current s3native 
> properties are all prefixed by {{fs.s3n}}, so this is likely not used 
> currently. Additionally, core-default.xml has the description of these unused 
> properties:
> {noformat}
> 
> 
>   s3native.stream-buffer-size
>   4096
>   The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.
> 
> 
>   s3native.bytes-per-checksum
>   512
>   The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size
> 
> 
>   s3native.client-write-packet-size
>   65536
>   Packet size for clients to write
> 
> 
>   s3native.blocksize
>   67108864
>   Block size
> 
> 
>   s3native.replication
>   3
>   Replication factor
> 
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion if these 
> properties are defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13599:

Attachment: HADOOP-13599-branch-2-001.patch

Patch 001: move to an atomic boolean for the close check
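
For the archive, the shape of that change is roughly the following (a sketch only, 
assuming {{java.util.concurrent.atomic.AtomicBoolean}}; not the actual patch):

{code}
private final AtomicBoolean closed = new AtomicBoolean(false);

@Override
public void close() throws IOException {
  // Guard with an atomic flag instead of synchronizing the whole method,
  // so a repeated or concurrent close() returns immediately instead of blocking.
  if (closed.getAndSet(true)) {
    return;
  }
  try {
    super.close();
  } finally {
    // release helpers (thread pools, transfer manager, ...) here
  }
}
{code}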

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch
>
>
> We've had a report of hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139
> S3a close needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13599:

Status: Patch Available  (was: Open)

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch
>
>
> We've had a report of hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139
> S3a close needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13598) Add eol=lf for unix format files in .gitattributes

2016-09-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13598:
---
Attachment: HADOOP-13598.001.patch

> Add eol=lf for unix format files in .gitattributes
> --
>
> Key: HADOOP-13598
> URL: https://issues.apache.org/jira/browse/HADOOP-13598
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
> Attachments: HADOOP-13598.001.patch
>
>
> In .gitattributes, eol=crlf is set for .cmd and .bat, but eol=lf is not set 
> for .java, .html, and so on. Setting eol=lf would be great for developers 
> using Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13598) Add eol=lf for unix format files in .gitattributes

2016-09-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13598:
---
Status: Patch Available  (was: Open)

Attach an initial patch to address the comment that [~ajisakaa] made in 
HDFS-10856.
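
For anyone unfamiliar with the mechanism, the change amounts to entries along these 
lines in {{.gitattributes}} (illustrative only; the attached patch has the actual 
pattern list):

{noformat}
# existing rules: keep Windows line endings for Windows scripts
*.cmd  text eol=crlf
*.bat  text eol=crlf

# proposed additions: force LF for unix-format source and text files
*.java text eol=lf
*.html text eol=lf
*.sh   text eol=lf
{noformat}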

> Add eol=lf for unix format files in .gitattributes
> --
>
> Key: HADOOP-13598
> URL: https://issues.apache.org/jira/browse/HADOOP-13598
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
> Attachments: HADOOP-13598.001.patch
>
>
> In .gitattributes, eol=crlf is set for .cmd and .bat, but eol=lf is not set 
> for .java, .html, and so on. Setting eol=lf would be great for developers 
> using Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486976#comment-15486976
 ] 

Hadoop QA commented on HADOOP-13164:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-13164 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13164 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828204/HADOOP-13164.branch-2-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10493/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13164.branch-2-002.patch, 
> HADOOP-13164.branch-2.WIP.002.patch, HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting of the folder structure, it can take much longer because it invokes 
> getFileStatus multiple times. Instead, it should break out of the loop once 
> a non-empty directory is encountered. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


