[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Attachment: HADOOP-13529-HADOOP-12756.003.patch

> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch, 
> HADOOP-13529-HADOOP-12756.002.patch, HADOOP-13529-HADOOP-12756.003.patch
>
>
> 1. argument and variable naming
> 2. abstract a utility class (see the sketch below)
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests
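As a rough illustration of item 2 above (and not the code from the actual HADOOP-13529 patch), pulling duplicated configuration handling out of the FileSystem classes into a static helper might look like the sketch below; the class and method shown here are placeholders invented for this example.

{code}
import org.apache.hadoop.conf.Configuration;

/**
 * Hypothetical utility class for the hadoop-aliyun module, used only to
 * illustrate the "abstract a utility class" item; names and methods are
 * placeholders, not the actual patch.
 */
public final class AliyunOSSUtils {

  private AliyunOSSUtils() {
    // static helpers only
  }

  /**
   * Read an int option with a default and a lower bound. Logic like this,
   * when duplicated across the FileSystem and store classes, is a typical
   * candidate for extraction into a shared utility class.
   */
  public static int intOption(Configuration conf, String key,
      int defaultValue, int minimumValue) {
    int value = conf.getInt(key, defaultValue);
    return Math.max(value, minimumValue);
  }
}
{code}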



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436330#comment-15436330
 ] 

Hadoop QA commented on HADOOP-13529:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
36 new + 0 unchanged - 0 fixed = 36 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825410/HADOOP-13529-HADOOP-12756.002.patch
 |
| JIRA Issue | HADOOP-13529 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 8343c1e1ad83 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / aff1841 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10365/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10365/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10365/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
>   

[jira] [Commented] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node

2016-08-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436320#comment-15436320
 ] 

Xiao Chen commented on HADOOP-13539:


Thanks Andrew for the review!

[~asuresh], I see from history this was done by Tucu, but do you have any 
suggestions?
I plan to commit this EOB Thursday, if no objections by then.

> KMS's zookeeper-based secret manager should be consistent when failed to 
> remove node
> 
>
> Key: HADOOP-13539
> URL: https://issues.apache.org/jira/browse/HADOOP-13539
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13539.01.patch
>
>
> In {{ZKDelegationTokenSecretManager}}, the two methods 
> {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet 
> handle exceptions differently. We should not throw an RTE if a node cannot be 
> removed - logging is enough. (A minimal sketch follows below.)
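For illustration only (a simplified sketch, not the HADOOP-13539 patch itself): the consistent, non-throwing behaviour the description asks for could look roughly like this, assuming a Curator {{CuratorFramework}} client as the secret manager uses.

{code}
import org.apache.curator.framework.CuratorFramework;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ZkNodeCleaner {
  private static final Logger LOG =
      LoggerFactory.getLogger(ZkNodeCleaner.class);
  private final CuratorFramework zkClient;

  ZkNodeCleaner(CuratorFramework zkClient) {
    this.zkClient = zkClient;
  }

  /**
   * Best-effort removal: if the node cannot be deleted, log and continue
   * instead of throwing a RuntimeException, so that master-key and token
   * removal behave the same way.
   */
  void removeNodeQuietly(String path) {
    try {
      if (zkClient.checkExists().forPath(path) != null) {
        zkClient.delete().guaranteed().forPath(path);
      }
    } catch (Exception e) {
      LOG.warn("Could not remove ZK node " + path + ", ignoring", e);
    }
  }
}
{code}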






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Attachment: HADOOP-13529-HADOOP-12756.002.patch

> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch, 
> HADOOP-13529-HADOOP-12756.002.patch
>
>
> 1. argument and variable naming
> 2. abstract a utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Attachment: (was: HADOOP-13529-HADOOP-12756.002.patch)

> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variable naming
> 2. abstract a utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests






[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initialize code

2016-08-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436307#comment-15436307
 ] 

Hudson commented on HADOOP-13534:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10341 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10341/])
HADOOP-13534. Remove unused TrashPolicy#getInstance and initialize code. 
(aajisaka: rev ab3b727b5f1511ea05f75d8798eaf85f6defcf53)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java


> Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initialize}} APIs 
> with Path are not used anymore.






[jira] [Commented] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node

2016-08-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436287#comment-15436287
 ] 

Andrew Wang commented on HADOOP-13539:
--

+1 from me, though we might want to get a second check from [~asuresh] that 
this isn't intended behavior.

> KMS's zookeeper-based secret manager should be consistent when failed to 
> remove node
> 
>
> Key: HADOOP-13539
> URL: https://issues.apache.org/jira/browse/HADOOP-13539
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13539.01.patch
>
>
> In {{ZKDelegationTokenSecretManager}}, the two methods 
> {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet 
> handle exceptions differently. We should not throw an RTE if a node cannot be 
> removed - logging is enough.






[jira] [Commented] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436286#comment-15436286
 ] 

Hadoop QA commented on HADOOP-13529:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
7 new + 0 unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-tools_hadoop-aliyun generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825405/HADOOP-13529-HADOOP-12756.002.patch
 |
| JIRA Issue | HADOOP-13529 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 6a8e8cd418f2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / aff1841 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10364/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10364/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10364/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10364/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 

[jira] [Updated] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initialize code

2016-08-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13534:
---
  Resolution: Fixed
Hadoop Flags: Incompatible change, Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~linyiqun] for the contribution.

> Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initialize}} APIs 
> with Path are not used anymore.






[jira] [Updated] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initialize code

2016-08-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13534:
---
Target Version/s: 3.0.0-alpha2
Release Note: TrashPolicy#getInstance and initialize with Path were 
removed. Use the methods without Path instead.
 Description: A follow-on from HDFS-8831: now the {{getInstance}} and 
{{initialize}} APIs with Path are not used anymore.  (was: A follow-on from 
HDFS-8831: now the {{getInstance}} and {{initiate}} APIs with Path are not used 
anymore.)
 Summary: Remove unused TrashPolicy#getInstance and initialize code 
 (was: Remove unused TrashPolicy#getInstance and initiate code)
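For callers, the change amounts to dropping the trash-root Path argument. A minimal before/after sketch, assuming the overloads named in the release note above (illustrative, not taken from the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.TrashPolicy;

public class TrashPolicyMigration {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Before (removed overload, shown for comparison only):
    // TrashPolicy policy = TrashPolicy.getInstance(conf, fs, fs.getHomeDirectory());

    // After: no Path argument; the policy resolves trash locations itself.
    TrashPolicy policy = TrashPolicy.getInstance(conf, fs);
    System.out.println("Trash enabled: " + policy.isEnabled());
  }
}
{code}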

> Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initialize}} APIs 
> with Path are not used anymore.






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. argument and variable naming
2. abstract a utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
{code}
bug in copyDir
{code}
8. add some unit tests

  was:
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
{code}
bug in copyDir
{code}
8. add some unit tests


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch, 
> HADOOP-13529-HADOOP-12756.002.patch
>
>
> 1. argument and variable naming
> 2. abstract a utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Attachment: HADOOP-13529-HADOOP-12756.002.patch

> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch, 
> HADOOP-13529-HADOOP-12756.002.patch
>
>
> 1. argument and variable naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
{code}
bug in copyDir
{code}
8. add some unit tests

  was:
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
{code}
bug in copyDir
{code}


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variable naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests






[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code

2016-08-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436141#comment-15436141
 ] 

Akira Ajisaka commented on HADOOP-13534:


+1, thanks Yiqun.

> Remove unused TrashPolicy#getInstance and initiate code
> ---
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initiate}} APIs 
> with Path are not used anymore.






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
{code}
bug in copyDir
{code}

  was:
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
7.1 bug in copyDir


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variable naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments
7. some bug fixes
7.1 bug in copyDir

  was:
1. argument and variable naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODOs
6. remove unnecessary comments


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variable naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> 7.1 bug in copyDir






[jira] [Commented] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436007#comment-15436007
 ] 

Hadoop QA commented on HADOOP-13546:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 54 unchanged - 4 fixed = 54 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825373/HADOOP-13546-HADOOP-13436.001.patch
 |
| JIRA Issue | HADOOP-13546 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9bbae62f7903 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1f3293 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10363/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10363/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> 

[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2016-08-24 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435984#comment-15435984
 ] 

Duo Zhang commented on HADOOP-13433:


Any comments? Thanks.

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, 
> HADOOP-13433.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:53)
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:46)
> ... 31 more
> {noformat}
> It rarely happens, but if it happens, the regionserver will be stuck and can 
> never recover.
> Recently we added a log after 

[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: HADOOP-13546-HADOOP-13436.001.patch

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same. (A self-contained 
> sketch follows below.)
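The point is that a stateless policy can treat every instance as equal, so a connection cache keyed (in part) by the retry policy collapses to a single entry. The following is a self-contained sketch of that pattern; it does not reproduce Hadoop's actual RetryPolicy or ConnectionId classes.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class RetryPolicyEquality {

  /** Stand-in for a stateless policy such as TryOnceThenFail. */
  static final class TryOnceThenFailPolicy {
    @Override
    public boolean equals(Object obj) {
      // Every instance behaves identically, so class identity is enough.
      return obj != null && obj.getClass() == getClass();
    }

    @Override
    public int hashCode() {
      return getClass().hashCode();
    }
  }

  /** Simplified stand-in for a connection cache key such as ConnectionId. */
  static final class ConnectionKey {
    private final String address;
    private final TryOnceThenFailPolicy retryPolicy;

    ConnectionKey(String address, TryOnceThenFailPolicy retryPolicy) {
      this.address = address;
      this.retryPolicy = retryPolicy;
    }

    @Override
    public boolean equals(Object o) {
      if (!(o instanceof ConnectionKey)) {
        return false;
      }
      ConnectionKey other = (ConnectionKey) o;
      return address.equals(other.address)
          && retryPolicy.equals(other.retryPolicy);
    }

    @Override
    public int hashCode() {
      return Objects.hash(address, retryPolicy);
    }
  }

  public static void main(String[] args) {
    Map<ConnectionKey, String> connections = new HashMap<>();
    connections.put(
        new ConnectionKey("nn:8020", new TryOnceThenFailPolicy()), "conn-1");
    // A second, distinct policy instance maps to the same cached connection.
    System.out.println(
        connections.get(new ConnectionKey("nn:8020", new TryOnceThenFailPolicy())));
  }
}
{code}

Without the overridden equals/hashCode, the second lookup would miss and a separate connection entry would be created.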






[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: (was: HADOOP-13546-HADOOP-13436.001.patch)

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.






[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: HADOOP-13546-HADOOP-13436.001.patch

v001 addresses some review comments. 

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.






[jira] [Commented] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435943#comment-15435943
 ] 

Hadoop QA commented on HADOOP-13546:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
47s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 54 unchanged - 4 fixed = 57 total (was 58) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825357/HADOOP-13546-HADOOP-13436.000.patch
 |
| JIRA Issue | HADOOP-13546 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2e074bc1da34 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1f3293 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10362/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10362/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10362/artifact/patchprocess/patch-compile-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10362/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10362/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10362/artifact/patchprocess/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Comment Edited] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435897#comment-15435897
 ] 

Xiaobing Zhou edited comment on HADOOP-13546 at 8/24/16 11:03 PM:
--

I posted initial patch v000. This should target 2.x. Please kindly review it, 
thanks.


was (Author: xiaobingo):
I posted initial patch v000. This should target 2.x.

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.






[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Status: Patch Available  (was: Open)

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.






[jira] [Commented] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435897#comment-15435897
 ] 

Xiaobing Zhou commented on HADOOP-13546:


I posted initial patch v000. This should target 2.x.

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.






[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: HADOOP-13546-HADOOP-13436.000.patch

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are equivalent, 
> so that they eventually share the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.






[jira] [Commented] (HADOOP-13508) FsPermission's string constructor fails on valid permissions like "1777"

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435893#comment-15435893
 ] 

Hadoop QA commented on HADOOP-13508:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 520 new + 538 unchanged - 5 fixed = 1058 total (was 543) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825350/HADOOP-13508.003.patch
 |
| JIRA Issue | HADOOP-13508 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7e4e7cabdf45 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1f3293 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10361/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10361/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10361/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10361/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FsPermission's string constructor fails on valid permissions like "1777"
> 

[jira] [Updated] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-08-24 Thread Thomas Poepping (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Poepping updated HADOOP-13344:
-
Attachment: HADOOP-13344.01.patch

This patch targets trunk. It moves the slf4j-related jars to a subdirectory so 
that they are not picked up by the classpath wildcard. The slf4j-related jars 
can then be optionally excluded from the classpath by setting the 
HADOOP_USE_BUILTIN_SLF4J_BINDING option to false.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not exactly the same as the one brought in 
> by Hadoop, then the logging jars on the two classpaths will conflict. This 
> patch introduces an optional setting to remove Hadoop's SLF4J binding from the 
> classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as the bin/ and hadoop-config.sh 
> structure has changed in 3.0.0.






[jira] [Commented] (HADOOP-13543) [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues

2016-08-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435854#comment-15435854
 ] 

Andrew Wang commented on HADOOP-13543:
--

Forgot to mention, JIRA is at YETUS-445.

> [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues
> --
>
> Key: HADOOP-13543
> URL: https://issues.apache.org/jira/browse/HADOOP-13543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>
> Now that we have fixed JDiff report generation for 2.8.0 and above, we should 
> analyse them.
> For the previous releases, I was applying the jdiff patches myself, and 
> analysed them offline. It's better to track them here now that the reports 
> are automatically getting generated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13543) [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues

2016-08-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435851#comment-15435851
 ] 

Andrew Wang commented on HADOOP-13543:
--

FWIW I'm working on a wrapper for Java ACC that provides more user-friendly API 
reports than JDiff. My WIP patch should already be usable.

> [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues
> --
>
> Key: HADOOP-13543
> URL: https://issues.apache.org/jira/browse/HADOOP-13543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>
> Now that we have fixed JDiff report generation for 2.8.0 and above, we should 
> analyse them.
> For the previous releases, I was applying the jdiff patches myself, and 
> analysed them offline. It's better to track them here now that the reports 
> are automatically getting generated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Description: 
Override #equals and #hashcode to ensure multiple instances are equivalent. 
They eventually
share the same RPC connection given the other arguments of constructing 
ConnectionId are
the same.

  was:
Implement #equals and #hashcode to ensure multiple instances are equivalent. 
They eventually
share the same RPC connection given the other arguments of constructing 
ConnectionId are
the same.
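
For illustration, a minimal sketch of what overriding equals and hashCode on a 
stateless policy such as TryOnceThenFail could look like; this is not the actual 
patch, and the class name StatelessRetryPolicy is a hypothetical stand-in:

{code}
// Hypothetical sketch: a stateless policy can treat all instances of the same
// class as equal, so ConnectionId lookups can reuse the same RPC connection.
public class StatelessRetryPolicy {
  @Override
  public boolean equals(Object obj) {
    // Any two instances are interchangeable because the policy carries no state.
    return obj == this || (obj != null && obj.getClass() == getClass());
  }

  @Override
  public int hashCode() {
    // Constant per class, so equal instances land in the same hash bucket.
    return getClass().hashCode();
  }
}
{code}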


> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Override #equals and #hashcode to ensure multiple instances are equivalent. 
> They eventually
> share the same RPC connection given the other arguments of constructing 
> ConnectionId are
> the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail implement equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Summary: Have TryOnceThenFail implement equals and hashCode  (was: Have 
TryOnceThenFail implement hashCode and equals)

> Have TryOnceThenFail implement equals and hashCode
> --
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Implement #equals and #hashcode to ensure multiple instances are equivalent. 
> They eventually
> share the same RPC connection given the other arguments of constructing 
> ConnectionId are
> the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail override equals and hashCode

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Summary: Have TryOnceThenFail override equals and hashCode  (was: Have 
TryOnceThenFail implement equals and hashCode)

> Have TryOnceThenFail override equals and hashCode
> -
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Implement #equals and #hashcode to ensure multiple instances are equivalent. 
> They eventually
> share the same RPC connection given the other arguments of constructing 
> ConnectionId are
> the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Have TryOnceThenFail implement hashCode and equals

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Description: 
Implement #equals and #hashcode to ensure multiple instances are equivalent. 
They eventually
share the same RPC connection given the other arguments of constructing 
ConnectionId are
the same.

> Have TryOnceThenFail implement hashCode and equals
> --
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Implement #equals and #hashcode to ensure multiple instances are equivalent. 
> They eventually
> share the same RPC connection given the other arguments of constructing 
> ConnectionId are
> the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13546) Have TryOnceThenFail implement hashCode and equals

2016-08-24 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-13546:
--

 Summary: Have TryOnceThenFail implement hashCode and equals
 Key: HADOOP-13546
 URL: https://issues.apache.org/jira/browse/HADOOP-13546
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 2.7.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13508) FsPermission's string constructor fails on valid permissions like "1777"

2016-08-24 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13508:
---
Attachment: HADOOP-13508.003.patch

The {{UmaskParser}} also includes symbolic patterns for {{FsPermission}}, not 
just the short values supported by v2. That said, the parser doesn't seem to 
recognize the sticky bit in the symbolic representation, either. The regexp 
parser framework seems pretty heavy, but the bug fix can precede a rewrite.

Also updated the unit test to check for the sticky bit in octal and to verify 
the symbolic constructors that work.
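
For illustration, a minimal JUnit sketch of that octal sticky-bit check, assuming 
the fix is applied so the constructor no longer throws; the test class name here 
is hypothetical, not taken from the attached patch:

{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.junit.Test;

// Hypothetical test sketch: "1777" is sticky bit plus rwxrwxrwx.
public class StickyBitParsingSketch {
  @Test
  public void octalStringWithStickyBit() {
    FsPermission perm = new FsPermission("1777");
    assertTrue(perm.getStickyBit());
    assertEquals(FsAction.ALL, perm.getUserAction());
    assertEquals(FsAction.ALL, perm.getGroupAction());
    assertEquals(FsAction.ALL, perm.getOtherAction());
  }
}
{code}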

> FsPermission's string constructor fails on valid permissions like "1777"
> 
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch, 
> HADOOP-13508.003.patch
>
>
> FsPermissions's string constructor breaks on valid permission strings, like 
> "1777". 
> This is because FsPermission class naïvely uses UmaskParser to do it’s 
> parsing of permissions: (from source code):
> public FsPermission(String mode) {
> this((new UmaskParser(mode)).getUMask());
> }
> The mode string UMask accepts is subtly different (esp wrt sticky bit), so 
> parsing Umask is not the same as parsing FsPermission. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13488:
---
Affects Version/s: 2.7.2

> Have TryOnceThenFail implement ConnectionRetryPolicy
> 
>
> Key: HADOOP-13488
> URL: https://issues.apache.org/jira/browse/HADOOP-13488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13488.000.patch, HADOOP-13488.001.patch
>
>
> As the most commonly used default or fallback policy, TryOnceThenFail is 
> often used at both the RetryInvocationHandler and connection level. As 
> proposed in HADOOP-13436, it should implement ConnectionRetryPolicy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13436) RPC connections are leaking due to not overriding hashCode and equals

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13436:
---
Attachment: Proposal-of-Fixing-Connection-Leakage.pdf

> RPC connections are leaking due to not overriding hashCode and equals
> -
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: Proposal-of-Fixing-Connection-Leakage.pdf, repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally,  Client#getConnection is doing lookup relying on 
> ConnectionId #hashCode and then #equals, which compose checking 
> Subclass-of-RetryPolicy #hashCode and #equals. If subclasses of RetryPolicy 
> neglect overriding #hashCode or #equals, every instance of RetryPolicy with 
> equivalent fields' values (e.g. MultipleLinearRandomRetry[6x1ms, 
> 10x6ms]) will lead to a brand new connection because the check will fall 
> back to Object#hashCode and Object#equals which is distinct and false for 
> distinct instances.
> This is stack trace where the anonymous RetryPolicy implementation 
> (neglecting overriding hashCode and equals) in 
> RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:114)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at 

[jira] [Commented] (HADOOP-13436) RPC connections are leaking due to not overriding hashCode and equals

2016-08-24 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435781#comment-15435781
 ] 

Xiaobing Zhou commented on HADOOP-13436:


Based on careful discussion, we came up with different fixes for 2.x and 3.x. 
Simply put, one is an in-place fix for Hadoop 2.x; the other introduces 
ConnectionRetryPolicy for Hadoop 3.x, since ConnectionRetryPolicy needs 
fundamental changes to function signatures in ipc.RPC and ipc.Client, which 
would break backward compatibility in 2.x. I posted a proposal. Any feedback is 
appreciated. Thanks [~jingzhao] for your precious input.

> RPC connections are leaking due to not overriding hashCode and equals
> -
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally,  Client#getConnection is doing lookup relying on 
> ConnectionId #hashCode and then #equals, which compose checking 
> Subclass-of-RetryPolicy #hashCode and #equals. If subclasses of RetryPolicy 
> neglect overriding #hashCode or #equals, every instance of RetryPolicy with 
> equivalent fields' values (e.g. MultipleLinearRandomRetry[6x1ms, 
> 10x6ms]) will lead to a brand new connection because the check will fall 
> back to Object#hashCode and Object#equals which is distinct and false for 
> distinct instances.
> This is stack trace where the anonymous RetryPolicy implementation 
> (neglecting overriding hashCode and equals) in 
> RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:114)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
> at 
> 

[jira] [Updated] (HADOOP-13436) RPC connections are leaking due to not overriding hashCode and equals

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13436:
---
Summary: RPC connections are leaking due to not overriding hashCode and 
equals  (was: RPC connections are leaking due to missing equals override in 
RetryUtils#getDefaultRetryPolicy)

> RPC connections are leaking due to not overriding hashCode and equals
> -
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally,  Client#getConnection is doing lookup relying on 
> ConnectionId #hashCode and then #equals, which compose checking 
> Subclass-of-RetryPolicy #hashCode and #equals. If subclasses of RetryPolicy 
> neglect overriding #hashCode or #equals, every instance of RetryPolicy with 
> equivalent fields' values (e.g. MultipleLinearRandomRetry[6x1ms, 
> 10x6ms]) will lead to a brand new connection because the check will fall 
> back to Object#hashCode and Object#equals which is distinct and false for 
> distinct instances.
> This is stack trace where the anonymous RetryPolicy implementation 
> (neglecting overriding hashCode and equals) in 
> RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:114)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> 

[jira] [Updated] (HADOOP-13436) RPC connections are leaking due to missing equals override in RetryUtils#getDefaultRetryPolicy

2016-08-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13436:
---
Description: 
We've noticed RPC connections are increasing dramatically in a Kerberized HDFS 
cluster with {noformat}dfs.client.retry.policy.enabled{noformat} enabled. 
Internally,  Client#getConnection is doing lookup relying on ConnectionId 
#hashCode and then #equals, which compose checking Subclass-of-RetryPolicy 
#hashCode and #equals. If subclasses of RetryPolicy neglect overriding 
#hashCode or #equals, every instance of RetryPolicy with equivalent fields' 
values (e.g. MultipleLinearRandomRetry[6x1ms, 10x6ms]) will lead to a 
brand new connection because the check will fall back to Object#hashCode and 
Object#equals which is distinct and false for distinct instances.

This is stack trace where the anonymous RetryPolicy implementation (neglecting 
overriding hashCode and equals) in RetryUtils#getDefaultRetryPolicy is called:
{noformat}
at 
org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
at 
org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
at 
org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
at 
org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:114)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
at 
org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at 
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
{noformat}


Three options to fix the problem:
1. All subclasses of RetryPolicy must override equals and hashCode to deliver a 
less discriminating equivalence relation, i.e. they are equal if they have 
meaningfully equivalent fields' values (e.g. MultipleLinearRandomRetry[6x1ms, 
10x6ms]); see the sketch after this list.
2. Change ConnectionId#equals by removing the RetryPolicy#equals component.
3. Let WebHDFS reuse the DFSClient.
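
For illustration of option 1, a minimal sketch of a fields-based equals/hashCode 
override; the class and fields here are made up, and the real policies such as 
MultipleLinearRandomRetry carry more state:

{code}
import java.util.Objects;

// Hypothetical policy used only to illustrate option 1.
public class FixedSleepRetryPolicy {
  private final int maxRetries;
  private final long sleepMillis;

  public FixedSleepRetryPolicy(int maxRetries, long sleepMillis) {
    this.maxRetries = maxRetries;
    this.sleepMillis = sleepMillis;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (o == null || getClass() != o.getClass()) {
      return false;
    }
    FixedSleepRetryPolicy that = (FixedSleepRetryPolicy) o;
    // Policies configured identically compare equal, so the ConnectionId lookup
    // in Client#getConnection can reuse the existing RPC connection.
    return maxRetries == that.maxRetries && sleepMillis == that.sleepMillis;
  }

  @Override
  public int hashCode() {
    return Objects.hash(maxRetries, sleepMillis);
  }
}
{code}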

  was:

[jira] [Comment Edited] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435666#comment-15435666
 ] 

Giovanni Matteo Fumarola edited comment on HADOOP-13545 at 8/24/16 8:55 PM:


I did run locally all the tests that use HSQLDB and all of them work fine.
[~ste...@apache.org] Can you take a look?


was (Author: giovanni.fumarola):
I did run locally all the tests that use HSQLDB and all of them works fine.
[~ste...@apache.org] Can you take a look?

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-13545.v1.patch
>
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and 
> MVCC (multiversion concurrency control) transaction control models.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435666#comment-15435666
 ] 

Giovanni Matteo Fumarola commented on HADOOP-13545:
---

I did run locally all the tests that use HSQLDB and all of them work fine.
[~ste...@apache.org] Can you take a look?

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-13545.v1.patch
>
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and 
> MVCC (multiversion concurrency control) transaction control models.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12516) jdiff fails with error 'duplicate comment id' about MetricsSystem.register_changed

2016-08-24 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved HADOOP-12516.
-
Resolution: Duplicate

This is already fixed by HADOOP-13428, closing as dup.

> jdiff fails with error 'duplicate comment id' about 
> MetricsSystem.register_changed
> --
>
> Key: HADOOP-12516
> URL: https://issues.apache.org/jira/browse/HADOOP-12516
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>
> "mvn package -Pdist,docs -DskipTests" fails with the following error. It looks 
> like a jdiff problem, as Li Lu mentioned on HADOOP-11776.
> {quote}
>   [javadoc] ExcludePrivateAnnotationsJDiffDoclet
>   [javadoc] JDiff: doclet started ...
>   [javadoc] JDiff: reading the old API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
>  API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
> the file 'Apache_Hadoop_Common_2.6.0.xml'
>   ...
>   [javadoc] JDiff: reading the new API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
>  incorrectly formatted @link in text: Options to be used by the \{@link 
> Find\} command and its \{@link Expression\}s.
>   
>   [javadoc] Error: duplicate comment id: 
> org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
> java.lang.String, T)
> {quote}
> A link to the comment by Li Lu is [here| 
> https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13425) IPC layer optimizations

2016-08-24 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435636#comment-15435636
 ] 

Kihwal Lee commented on HADOOP-13425:
-

[~daryn], are you going to do further work under this jira?  Otherwise I will 
see if the jiras can be put in 2.8.

> IPC layer optimizations
> ---
>
> Key: HADOOP-13425
> URL: https://issues.apache.org/jira/browse/HADOOP-13425
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> Umbrella jira for y! optimizations to reduce object allocations, more 
> efficiently use protobuf APIs, unified ipc and webhdfs callq to enable QoS, 
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435625#comment-15435625
 ] 

Hadoop QA commented on HADOOP-13545:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825329/HADOOP-13545.v1.patch 
|
| JIRA Issue | HADOOP-13545 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 7559accc22a6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3476156 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10360/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10360/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-13545.v1.patch
>
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and 
> MVCC (multiversion concurrency control) transaction control models.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-13545:
--
Description: 
Upgrade HSQLDB from 2.0.0 to 2.3.4.
Version 2.3.4 is fully multithreaded and supports high performance 2PL and MVCC 
(multiversion concurrency control) transaction control models.

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-13545.v1.patch
>
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and 
> MVCC (multiversion concurrency control) transaction control models.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-13545:
--
Priority: Minor  (was: Major)

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-13545.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-13545:
--
Status: Patch Available  (was: Open)

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
> Attachments: HADOOP-13545.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-13545:
--
Attachment: HADOOP-13545.v1.patch

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
> Attachments: HADOOP-13545.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13545) Upgrade HSQLDB to 2.3.4

2016-08-24 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created HADOOP-13545:
-

 Summary: Upgrade HSQLDB to 2.3.4
 Key: HADOOP-13545
 URL: https://issues.apache.org/jira/browse/HADOOP-13545
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException

2016-08-24 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435572#comment-15435572
 ] 

Kihwal Lee commented on HADOOP-12726:
-

I think this broke MAPREDUCE-6767.

> Unsupported FS operations should throw UnsupportedOperationException
> 
>
> Key: HADOOP-12726
> URL: https://issues.apache.org/jira/browse/HADOOP-12726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12726.001.patch, HADOOP-12726.002.patch, 
> HADOOP-12726.003.patch
>
>
> In the {{FileSystem}} implementation classes, unsupported operations throw 
> {{new IOException("Not supported")}}, which makes it needlessly difficult to 
> distinguish an actual error from an unsupported operation.  They should 
> instead throw {{new UnsupportedOperationException()}}.
> It's possible that this anti-idiom is used elsewhere in the code base.  This 
> JIRA should include finding and cleaning up those instances as well.
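
For illustration of the idiom change, a minimal before/after sketch; the class 
and method names are hypothetical, not taken from the patch:

{code}
import java.io.IOException;

// Hypothetical example of the anti-idiom and its replacement.
public class TruncateSupportExample {
  // Before: callers cannot tell an unsupported operation from a real I/O failure.
  public boolean truncateBefore(String path, long newLength) throws IOException {
    throw new IOException("Not supported");
  }

  // After: the unchecked exception states clearly that the operation is unsupported.
  public boolean truncateAfter(String path, long newLength) {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " does not support truncate");
  }
}
{code}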



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2016-08-24 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435538#comment-15435538
 ] 

Chen He edited comment on HADOOP-9565 at 8/24/16 7:46 PM:
--

Hi [~steve_l], thank you for spending time on my question. The new version of 
FileOutputCommitter has algorithm 2, which does not do the serial rename of all 
tasks in commitJob. I just found the parameter; it should resolve our problem. 

was (Author: airbots):
Hi [~steve_l], thank you for spending time on my question. The new version of 
FileOutputCommitter has algorithm 2 which does not have serial rename of all 
task in commitJob. Just find the parameter. It should resolve our problem. 

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Pieter Reuse
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make explicit the fact that some {{FileSystem}} implementations are 
> really blobstores, with different atomicity and consistency guarantees, by 
> adding a {{Blobstore}} interface to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.
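
A rough sketch, under stated assumptions, of what such a marker interface could 
look like; the interface and method names here are assumptions for illustration, 
not the proposed API:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.Path;

// Hypothetical sketch of a blobstore marker interface; not the actual proposal.
public interface Blobstore {
  /** True if directory listings are only eventually consistent. */
  boolean hasEventuallyConsistentListings();

  /** Server-side copy, a substitute for rename on stores without atomic rename. */
  boolean copy(Path source, Path destination) throws IOException;
}
{code}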



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13544) JDiff reports unnecessarily show unannotated APIs and cause confusion while our javadocs only show annotated and public APIs

2016-08-24 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created HADOOP-13544:


 Summary: JDiff reports unnecessarily show unannotated APIs and 
cause confusion while our javadocs only show annotated and public APIs
 Key: HADOOP-13544
 URL: https://issues.apache.org/jira/browse/HADOOP-13544
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker


Our javadocs only show annotated and @Public APIs (original JIRAs HADOOP-7782, 
HADOOP-6658).

But the JDiff reports show all APIs that are not annotated @Private. This causes 
confusion about how we read the reports and which APIs we actually broke.
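
For illustration of the mismatch, a minimal sketch using the real 
InterfaceAudience annotations from hadoop-annotations; the class names here are 
hypothetical:

{code}
import org.apache.hadoop.classification.InterfaceAudience;

// Appears in the javadocs and, as expected, in the JDiff report.
@InterfaceAudience.Public
public class AnnotatedPublicApi {
}

// Hidden from the javadocs, yet JDiff still reports it because it is merely
// unannotated rather than explicitly marked @InterfaceAudience.Private.
class UnannotatedHelper {
}
{code}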



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2016-08-24 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435538#comment-15435538
 ] 

Chen He commented on HADOOP-9565:
-

Hi [~steve_l], thank you for spending time on my question. The new version of 
FileOutputCommitter has algorithm 2, which does not do the serial rename of all 
tasks in commitJob. I just found the parameter; it should resolve our problem. 

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Pieter Reuse
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make explicit the fact that some {{FileSystem}} implementations are 
> really blobstores, with different atomicity and consistency guarantees, by 
> adding a {{Blobstore}} interface to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13423) Run JDiff on trunk for Hadoop-Common and analyze results

2016-08-24 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-13423:
-
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13543

> Run JDiff on trunk for Hadoop-Common and analyze results
> 
>
> Key: HADOOP-13423
> URL: https://issues.apache.org/jira/browse/HADOOP-13423
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: 3.0.0-alpha1-hadoop-common-jdiff.zip, 
> 3.0.0-alpha1-jdiff.zip
>
>
> We need to run JDiff and make sure the first 3.0.0 alpha release doesn't 
> include unnecessary API incompatible change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13543) [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues

2016-08-24 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created HADOOP-13543:


 Summary: [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports 
and fix any issues
 Key: HADOOP-13543
 URL: https://issues.apache.org/jira/browse/HADOOP-13543
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker


Now that we have fixed JDiff report generation for 2.8.0 and above, we should 
analyse them.

For the previous releases, I was applying the jdiff patches myself, and 
analysed them offline. It's better to track them here now that the reports are 
automatically getting generated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13396) Allow pluggable audit loggers in KMS

2016-08-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435447#comment-15435447
 ] 

Hudson commented on HADOOP-13396:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10338 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10338/])
HADOOP-13396. Allow pluggable audit loggers in KMS. Contributed by Xiao (xiao: 
rev 3476156807733505746951f0c93465927422)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAudit.java
* (add) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuditLogger.java
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
* (add) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/SimpleKMSAuditLogger.java
* (edit) 
hadoop-common-project/hadoop-kms/src/test/resources/log4j-kmsaudit.properties
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java
* (edit) hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java


> Allow pluggable audit loggers in KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch, HADOOP-13396.07.patch, HADOOP-13396.08.patch, 
> HADOOP-13396.09.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.
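
For illustration, a minimal sketch of what a pluggable audit logger abstraction 
can look like; the interface and method names here are assumptions, and the 
actual KMSAuditLogger interface added by the patch may differ:

{code}
import java.io.IOException;
import java.util.Properties;

// Hypothetical audit logger abstraction; the shipped KMSAuditLogger may differ.
public interface PluggableAuditLogger {
  /** Called once at startup with the audit-related configuration. */
  void initialize(Properties config) throws IOException;

  /** Called for every audited KMS operation. */
  void logEvent(String operation, String user, String key, String extraMessage);

  /** Flush and release any resources on shutdown. */
  void close() throws IOException;
}
{code}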



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13396) Allow pluggable audit loggers in KMS

2016-08-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13396:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk and branch-2. There were trivial conflicts when 
backporting to branch-2 due to HADOOP-12615. I compiled and ran TestKMSAudit 
(which passed) before pushing.

Thanks Allen, Sean, Wei-Chiu and Andrew for the discussions and reviews! The 
JSON logger will be done via HADOOP-13523.

> Allow pluggable audit loggers in KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch, HADOOP-13396.07.patch, HADOOP-13396.08.patch, 
> HADOOP-13396.09.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-24 Thread Albert Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435310#comment-15435310
 ] 

Albert Chu commented on HADOOP-13533:
-

Would it be better to make HADOOP_SSH_OPTS default to an empty string?  I 
believe this was the case in Hadoop 2.x.

If, say, a user has configured everything in .ssh/config, they have to 
knowingly set HADOOP_SSH_OPTS to an empty string to force it to be empty.  IMO 
this is the opposite of what most users would expect: most would think they 
only have to set HADOOP_SSH_OPTS if they need to.

> User cannot set empty HADOOP_SSH_OPTS environment variable option
> -
>
> Key: HADOOP-13533
> URL: https://issues.apache.org/jira/browse/HADOOP-13533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Assignee: Albert Chu
>Priority: Minor
>
> In hadoop-functions.sh in the hadoop_basic_init function there is this 
> initialization of HADOOP_SSH_OPTS:
> {noformat}
> HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
> StrictHostKeyChecking=no -o ConnectTimeout=10s"}
> {noformat}
> I believe this parameter substitution is a bug.  While most of the 
> environment variables set in the function are generally required for 
> functionality (HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe 
> HADOOP_SSH_OPTS is one of them.  If the user wishes to set HADOOP_SSH_OPTS to 
> an empty string (i.e. HADOOP_SSH_OPTS="") they should be able to.  But 
> instead, this is requiring HADOOP_SSH_OPTS to always be set to something.
> So I think the 
> {noformat}
> ":-"
> {noformat}
> in the above should be
> {noformat}
> "-"
> {noformat}
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13542) Immutable Configuration

2016-08-24 Thread Renan Vicente Gomes da Silva (JIRA)
Renan Vicente Gomes da Silva created HADOOP-13542:
-

 Summary: Immutable Configuration
 Key: HADOOP-13542
 URL: https://issues.apache.org/jira/browse/HADOOP-13542
 Project: Hadoop Common
  Issue Type: New Feature
  Components: conf
Reporter: Renan Vicente Gomes da Silva
Priority: Minor


Currently, configuration settings can be changed from code through 
org.apache.hadoop.conf.Configuration, for example to change java opts. It would be 
nice to be able to mark a property as immutable, so that changes to java opts from 
code are forbidden.

This would help when a company has different teams and wants to provide a template 
with java opts that cannot be changed from code.
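
A minimal sketch of one possible shape for this, assuming a hypothetical wrapper 
class (ImmutableConfiguration below is not an existing Hadoop API):

{code}
// Hypothetical read-only wrapper; org.apache.hadoop.conf.Configuration is the
// real class, but this wrapper exists only to illustrate the requested feature.
import org.apache.hadoop.conf.Configuration;

public class ImmutableConfiguration extends Configuration {
  private boolean frozen;

  public ImmutableConfiguration(Configuration template) {
    super(template);     // copy the template settings once, at construction time
    this.frozen = true;  // then refuse any further changes
  }

  @Override
  public void set(String name, String value) {
    if (frozen) {
      throw new UnsupportedOperationException(
          "Configuration is immutable; refusing to override " + name);
    }
    super.set(name, value);
  }
}
{code}

A team could then hand out the frozen template to application code while keeping 
the plain Configuration for places where overrides are legitimate.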



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435204#comment-15435204
 ] 

Allen Wittenauer commented on HADOOP-13533:
---

I think this makes total sense.  If a user has configured, say, .ssh/config to do 
what it needs to do, then there's no point in configuring HADOOP_SSH_OPTS.  
Additionally, if someone has replaced the ssh functionality in the shell code, 
that replacement may have different requirements.

> User cannot set empty HADOOP_SSH_OPTS environment variable option
> -
>
> Key: HADOOP-13533
> URL: https://issues.apache.org/jira/browse/HADOOP-13533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Assignee: Albert Chu
>Priority: Minor
>
> In hadoop-functions.sh in the hadoop_basic_init function there is this 
> initialization of HADOOP_SSH_OPTS:
> {noformat}
> HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
> StrictHostKeyChecking=no -o ConnectTimeout=10s"}
> {noformat}
> I believe this parameter substitution is a bug.  While most of the 
> environment variables set in the function are generally required for 
> functionality (HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe 
> HADOOP_SSH_OPTS is one of them.  If the user wishes to set HADOOP_SSH_OPTS to 
> an empty string (i.e. HADOOP_SSH_OPTS="") they should be able to.  But 
> instead, this is requiring HADOOP_SSH_OPTS to always be set to something.
> So I think the 
> {noformat}
> ":-"
> {noformat}
> in the above should be
> {noformat}
> "-"
> {noformat}
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13533:
--
Assignee: Albert Chu

> User cannot set empty HADOOP_SSH_OPTS environment variable option
> -
>
> Key: HADOOP-13533
> URL: https://issues.apache.org/jira/browse/HADOOP-13533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Assignee: Albert Chu
>Priority: Minor
>
> In hadoop-functions.sh in the hadoop_basic_init function there is this 
> initialization of HADOOP_SSH_OPTS:
> {noformat}
> HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
> StrictHostKeyChecking=no -o ConnectTimeout=10s"}
> {noformat}
> I believe this parameter substitution is a bug.  While most of the 
> environment variables set in the function are generally required for 
> functionality (HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe 
> HADOOP_SSH_OPTS is one of them.  If the user wishes to set HADOOP_SSH_OPTS to 
> an empty string (i.e. HADOOP_SSH_OPTS="") they should be able to.  But 
> instead, this is requiring HADOOP_SSH_OPTS to always be set to something.
> So I think the 
> {noformat}
> ":-"
> {noformat}
> in the above should be
> {noformat}
> "-"
> {noformat}
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13533:
--
Status: Patch Available  (was: Open)

> User cannot set empty HADOOP_SSH_OPTS environment variable option
> -
>
> Key: HADOOP-13533
> URL: https://issues.apache.org/jira/browse/HADOOP-13533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Minor
>
> In hadoop-functions.sh in the hadoop_basic_init function there is this 
> initialization of HADOOP_SSH_OPTS:
> {noformat}
> HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
> StrictHostKeyChecking=no -o ConnectTimeout=10s"}
> {noformat}
> I believe this parameter substitution is a bug.  While most of the 
> environment variables set in the function are generally required for 
> functionality (HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe 
> HADOOP_SSH_OPTS is one of them.  If the user wishes to set HADOOP_SSH_OPTS to 
> an empty string (i.e. HADOOP_SSH_OPTS="") they should be able to.  But 
> instead, this is requiring HADOOP_SSH_OPTS to always be set to something.
> So I think the 
> {noformat}
> ":-"
> {noformat}
> in the above should be
> {noformat}
> "-"
> {noformat}
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435196#comment-15435196
 ] 

Allen Wittenauer edited comment on HADOOP-13532 at 8/24/16 4:19 PM:


Doh. Dumb mistake on my part. :)

EDIT:

oh wait, that's not my shell code for once. Hooray!


was (Author: aw):
Doh. Dumb mistake on my part. :)

> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Assignee: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Took me a bit to realize "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435196#comment-15435196
 ] 

Allen Wittenauer commented on HADOOP-13532:
---

Doh. Dumb mistake on my part. :)

> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Assignee: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Took me a bit to realize "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13532:
--
Assignee: Albert Chu

> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Assignee: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Took me a bit to realize "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13532:
--
Status: Patch Available  (was: Open)

> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Took me a bit to realize "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-08-24 Thread Thomas Poepping (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435167#comment-15435167
 ] 

Thomas Poepping commented on HADOOP-13344:
--

Terribly sorry guys, I have a different diff now that should clear a few things 
up. I'll get that uploaded today and we can continue the conversation from 
there.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not exactly the same as the one brought in 
> by Hadoop, then there will be a conflict between the logging jars on the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as the bin/ and hadoop-config.sh structure 
> has changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435108#comment-15435108
 ] 

Hadoop QA commented on HADOOP-13541:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
47s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
25s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825273/HADOOP-13541-branch-2.8-001.patch
 |
| JIRA Issue | HADOOP-13541 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 53b90cf99385 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 

[jira] [Commented] (HADOOP-13527) Add Spark to CallerContext LimitedPrivate scope

2016-08-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435093#comment-15435093
 ] 

Allen Wittenauer commented on HADOOP-13527:
---

How many more things need to get added before we declare defeat, though? 
Ignoring the internal bits, we've already got HBase, Hive, Pig, and Spark.


> Add Spark to CallerContext LimitedPrivate scope
> ---
>
> Key: HADOOP-13527
> URL: https://issues.apache.org/jira/browse/HADOOP-13527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Weiqing Yang
>Assignee: Weiqing Yang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13527.000.patch
>
>
> A lot of Spark applications run on Hadoop. Spark will invoke the Hadoop caller 
> context APIs to set up its caller contexts in HDFS/YARN, so Hadoop should add 
> Spark as one of the users in the LimitedPrivate scope.
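
For illustration, the call pattern on the Spark side would be roughly as sketched 
below; the CallerContext builder/setCurrent API shape is recalled from memory and 
should be treated as an assumption rather than a verified signature listing.

{code}
// Rough sketch of how a downstream project such as Spark would tag its RPCs.
// The exact CallerContext API may differ; treat the calls below as assumptions.
import org.apache.hadoop.ipc.CallerContext;

public class SparkCallerContextSketch {
  public static void main(String[] args) {
    // Describe the upstream application/stage, then attach the context to the
    // current thread before issuing HDFS/YARN calls.
    CallerContext context =
        new CallerContext.Builder("SPARK_app_20160824_job_0_stage_1").build();
    CallerContext.setCurrent(context);
    // ... subsequent NameNode RPCs would carry this string into the audit log.
  }
}
{code}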



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-08-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13541:

Status: Patch Available  (was: Open)

Tested with branch-2.8 against S3A Ireland.

> explicitly declare the Joda time version S3A depends on
> ---
>
> Key: HADOOP-13541
> URL: https://issues.apache.org/jira/browse/HADOOP-13541
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13541-branch-2.8-001.patch
>
>
> Different builds of Hadoop are pulling in wildly different versions of Joda 
> time, depending on what other transitive dependencies are involved. For example, 
> 2.7.3 is somehow picking up Joda time 2.9.4, while branch-2.8 is actually behind 
> on 2.8.1. That's going to cause confusion when people upgrade from 2.7.x to 2.8 
> and find a dependency has got older.
> I propose explicitly declaring a dependency on joda-time in s3a and setting the 
> version to 2.9.4; upgrades are then something we can manage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-08-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13541:

Attachment: HADOOP-13541-branch-2.8-001.patch

Patch against branch-2.8; it's the first branch lagging behind the 2.7.3 version.

We should also consider applying this to branch-2.7, to retain control of the 
version there in the face of any upgrade of dependencies.

> explicitly declare the Joda time version S3A depends on
> ---
>
> Key: HADOOP-13541
> URL: https://issues.apache.org/jira/browse/HADOOP-13541
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13541-branch-2.8-001.patch
>
>
> Different builds of Hadoop are pulling in wildly different versions of Joda 
> time, depending on what other transitive dependencies are involved. For example, 
> 2.7.3 is somehow picking up Joda time 2.9.4, while branch-2.8 is actually behind 
> on 2.8.1. That's going to cause confusion when people upgrade from 2.7.x to 2.8 
> and find a dependency has got older.
> I propose explicitly declaring a dependency on joda-time in s3a and setting the 
> version to 2.9.4; upgrades are then something we can manage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-08-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13540:

Priority: Minor  (was: Major)

> improve section on troubleshooting s3a auth problems
> 
>
> Key: HADOOP-13540
> URL: https://issues.apache.org/jira/browse/HADOOP-13540
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13540-001.patch
>
>
> We should add more on how to go about diagnosing s3a auth problems. 
> When it happens, the need to keep the credentials secret makes it hard to 
> automate diagnostics; we can at least provide a better runbook for users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-08-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13541:
---

 Summary: explicitly declare the Joda time version S3A depends on
 Key: HADOOP-13541
 URL: https://issues.apache.org/jira/browse/HADOOP-13541
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 2.8.0, 2.7.3
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


Different builds of Hadoop are pulling in wildly different versions of Joda 
time, depending on what other transitive dependencies are involved. For example, 
2.7.3 is somehow picking up Joda time 2.9.4, while branch-2.8 is actually behind 
on 2.8.1. That's going to cause confusion when people upgrade from 2.7.x to 2.8 
and find a dependency has got older.

I propose explicitly declaring a dependency on joda-time in s3a and setting the 
version to 2.9.4; upgrades are then something we can manage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434845#comment-15434845
 ] 

Hadoop QA commented on HADOOP-13540:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825263/HADOOP-13540-001.patch
 |
| JIRA Issue | HADOOP-13540 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 44e21a9cb6e5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 092b4d5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10358/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10358/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> improve section on troubleshooting s3a auth problems
> 
>
> Key: HADOOP-13540
> URL: https://issues.apache.org/jira/browse/HADOOP-13540
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13540-001.patch
>
>
> We should add more on how to go about diagnosing s3a auth problems. 
> When it happens, the need to keep the credentials secret makes it hard to 
> automate diagnostics; we can at least provide a better runbook for users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-08-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13540:

Status: Patch Available  (was: Open)

> improve section on troubleshooting s3a auth problems
> 
>
> Key: HADOOP-13540
> URL: https://issues.apache.org/jira/browse/HADOOP-13540
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13540-001.patch
>
>
> We should add more on how to go about diagnosing s3a auth problems. 
> When it happens, the need to keep the credentials secret makes it hard to 
> automate diagnostics; we can at least provide a better runbook for users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-08-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13540:

Attachment: HADOOP-13540-001.patch

Patch 001; text emphasising common problems and a sequence of steps to try and 
diagnose things

> improve section on troubleshooting s3a auth problems
> 
>
> Key: HADOOP-13540
> URL: https://issues.apache.org/jira/browse/HADOOP-13540
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13540-001.patch
>
>
> We should add more on how to go about diagnosing s3a auth problems. 
> When it happens, the need to keep the credentials secret makes it hard to 
> automate diagnostics; we can at least provide a better runbook for users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-08-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13540:
---

 Summary: improve section on troubleshooting s3a auth problems
 Key: HADOOP-13540
 URL: https://issues.apache.org/jira/browse/HADOOP-13540
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


We should add more on how to go about diagnosing s3a auth problems. 

When it happens, the need to keep the credentials secret makes it hard to 
automate diagnostics; we can at least provide a better runbook for users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434796#comment-15434796
 ] 

Hadoop QA commented on HADOOP-13534:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
48s{color} | {color:green} root generated 0 new + 710 unchanged - 1 fixed = 710 
total (was 711) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 81 unchanged - 3 fixed = 81 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825256/HADOOP-13534.002.patch
 |
| JIRA Issue | HADOOP-13534 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b48a7e88e3c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 092b4d5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10357/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10357/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10357/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unused TrashPolicy#getInstance and initiate code
> ---
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
>   

[jira] [Commented] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434740#comment-15434740
 ] 

Hadoop QA commented on HADOOP-13529:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825216/HADOOP-13529-HADOOP-12756.001.patch
 |
| JIRA Issue | HADOOP-13529 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 86ab14d719bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / aff1841 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10356/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10356/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10356/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>

[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Status: Patch Available  (was: In Progress)

> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variant naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODO
> 6. remove unnecessary comments



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code

2016-08-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13534:
---
Attachment: HADOOP-13534.002.patch

{quote}
Would you rebase the patch?
{quote}
Attached a new patch rebased on the current code.

> Remove unused TrashPolicy#getInstance and initiate code
> ---
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initiate}} APIs 
> with Path are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-08-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434542#comment-15434542
 ] 

Steve Loughran commented on HADOOP-13344:
-

Grepping inside the JARs scares me. I'm particularly worried about what happens 
if someone sticks an uber JAR on the CP, such as groovy-all or spark-assembly: 
excluding a JAR because it has SLF4J in it runs the risk of removing things that 
are needed. It's also going to force a grep of every entry of every JAR, if I'm 
not mistaken, which will be expensive.

Can't you just have a variable listing wildcards of JARs to blacklist? Easy to 
test, easy to understand, and less hidden intelligence/fewer obscure support calls.
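
To make the wildcard-blacklist idea concrete: the real change would live in the 
bin/ shell scripts, but the filtering it implies is roughly the sketch below 
(names and paths are hypothetical).

{code}
// Illustration only of the proposed "blacklist by wildcard" filtering; the
// actual implementation would be shell, and these JAR names are made up.
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ClasspathBlacklistSketch {
  public static void main(String[] args) {
    // Hypothetical user-supplied wildcards of JARs to drop from the classpath.
    List<PathMatcher> blacklist = Stream.of("slf4j-log4j12-*.jar")
        .map(glob -> FileSystems.getDefault().getPathMatcher("glob:" + glob))
        .collect(Collectors.toList());

    List<String> classpath = Arrays.asList(
        "share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar",
        "share/hadoop/common/hadoop-common-2.8.0.jar");

    // Keep every entry whose file name matches none of the blacklist globs.
    List<String> filtered = classpath.stream()
        .filter(entry -> blacklist.stream()
            .noneMatch(m -> m.matches(Paths.get(entry).getFileName())))
        .collect(Collectors.toList());

    System.out.println(filtered); // [share/hadoop/common/hadoop-common-2.8.0.jar]
  }
}
{code}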

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not exactly the same as the one brought in 
> by Hadoop, then there will be a conflict between the logging jars on the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as the bin/ and hadoop-config.sh structure 
> has changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13519) Make Path serializable

2016-08-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434532#comment-15434532
 ] 

Steve Loughran commented on HADOOP-13519:
-

It's actually trying to be pronounceable, "adoof"

> Make Path serializable
> --
>
> Key: HADOOP-13519
> URL: https://issues.apache.org/jira/browse/HADOOP-13519
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13519-branch-2-001.patch, 
> HADOOP-13519-branch-2-002.patch
>
>
> If Hadoop Paths were serializable, you could use them in Spark operations 
> without having to convert them to and from URIs.
> It's trivial for Path to support this; as well as the OS code, we need to add 
> a check that there's no null URI coming in over the wire, and a test to 
> validate round tripping.
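
A minimal round-trip check of the kind described, assuming Path implements 
java.io.Serializable as this patch proposes:

{code}
// Round-trip serialization check; this assumes org.apache.hadoop.fs.Path
// implements java.io.Serializable, which is exactly what the patch adds.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import org.apache.hadoop.fs.Path;

public class PathRoundTripSketch {
  public static void main(String[] args) throws Exception {
    Path original = new Path("hdfs://nn1:8020/user/alice/data");

    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(original);                   // serialize
    }
    try (ObjectInputStream in =
             new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
      Path copy = (Path) in.readObject();          // deserialize
      System.out.println(original.equals(copy));   // expect: true
    }
  }
}
{code}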



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13536) Clean up IPv6 code

2016-08-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13536:

Summary: Clean up IPv6 code  (was: Clean up code)

> Clean up IPv6 code
> --
>
> Key: HADOOP-13536
> URL: https://issues.apache.org/jira/browse/HADOOP-13536
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Reporter: Elliott Clark
>
> Some code review comments came in while discussing the merge. We should clean 
> all of those up and get everything that's known to be outstanding done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2016-08-24 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434492#comment-15434492
 ] 

Duo Zhang commented on HADOOP-13433:


https://builds.apache.org/job/PreCommit-HADOOP-Build/10355/

The result is green... I do not know why the QA bot said 
TestHttpServerLifecycle timed out...

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, 
> HADOOP-13433.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:53)
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:46)
> ... 31 more
> 

[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code

2016-08-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434474#comment-15434474
 ] 

Akira Ajisaka commented on HADOOP-13534:


Thanks [~linyiqun] for fixing HADOOP-13538. Would you rebase the patch?

> Remove unused TrashPolicy#getInstance and initiate code
> ---
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initiate}} APIs 
> with Path are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initialize methods with Path in TrashPolicy

2016-08-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13538:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution.
Now we cannot add a contributor to the contributor role because we have hit the 
limit. Sorry for that.

> Deprecate getInstance and initialize methods with Path in TrashPolicy
> -
>
> Key: HADOOP-13538
> URL: https://issues.apache.org/jira/browse/HADOOP-13538
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch
>
>
> As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not 
> used anymore. We should deprecate these methods before removing them.
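
The deprecation step itself is the usual delegate-and-annotate pattern, sketched 
below with a hypothetical class; the real TrashPolicy signatures may differ.

{code}
// Hypothetical stand-in used only to show the deprecate-then-remove pattern;
// the real TrashPolicy methods and fields are not reproduced here.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ExampleTrashPolicy {
  private Configuration conf;
  private FileSystem fs;

  /** Preferred form: the trash location is derived from the FileSystem itself. */
  public static ExampleTrashPolicy getInstance(Configuration conf, FileSystem fs) {
    ExampleTrashPolicy policy = new ExampleTrashPolicy();
    policy.conf = conf;
    policy.fs = fs;
    return policy;
  }

  /**
   * Old form carrying an explicit home Path argument.
   * @deprecated Use {@link #getInstance(Configuration, FileSystem)} instead;
   *             the Path argument is no longer needed.
   */
  @Deprecated
  public static ExampleTrashPolicy getInstance(Configuration conf, FileSystem fs,
      Path home) {
    return getInstance(conf, fs);  // delegate so existing callers keep working
  }
}
{code}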



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initialize methods with Path in TrashPolicy

2016-08-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434432#comment-15434432
 ] 

Hudson commented on HADOOP-13538:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10333 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10333/])
HADOOP-13538. Deprecate getInstance and initialize methods with Path in 
(aajisaka: rev 092b4d5bfd02131d62723fc5673892305eb9fcef)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java


> Deprecate getInstance and initialize methods with Path in TrashPolicy
> -
>
> Key: HADOOP-13538
> URL: https://issues.apache.org/jira/browse/HADOOP-13538
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch
>
>
> As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not 
> used anymore. We should deprecate these methods before removing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434423#comment-15434423
 ] 

Hadoop QA commented on HADOOP-13433:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 96 unchanged - 2 fixed = 96 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  2s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825208/HADOOP-13433-v2.patch 
|
| JIRA Issue | HADOOP-13433 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux cf58bceb601d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c37346d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10355/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10355/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10355/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: 

[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initialize methods with Path in TrashPolicy

2016-08-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13538:
---
Summary: Deprecate getInstance and initialize methods with Path in 
TrashPolicy  (was: Deprecate getInstance and initiate methods with Path in 
TrashPolicy)

> Deprecate getInstance and initialize methods with Path in TrashPolicy
> -
>
> Key: HADOOP-13538
> URL: https://issues.apache.org/jira/browse/HADOOP-13538
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch
>
>
> As HADOOP-13534 mentioned, the getInstance and initialize APIs with Path are 
> no longer used. We should deprecate these methods before removing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy

2016-08-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434410#comment-15434410
 ] 

Akira Ajisaka commented on HADOOP-13538:


+1, checking this in.

> Deprecate getInstance and initiate methods with Path in TrashPolicy
> ---
>
> Key: HADOOP-13538
> URL: https://issues.apache.org/jira/browse/HADOOP-13538
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch
>
>
> As HADOOP-13534 mentioned, the getInstance and initialize APIs with Path are 
> no longer used. We should deprecate these methods before removing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. argument and variant naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODO
6. remove unnecessary comments

  was:
1. argument and variant naming
2. utility class
3. add some comments
4. adjust some configuration
5. fix TODO


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variant naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODO
> 6. remove unnecessary comments



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Attachment: HADOOP-13529-HADOOP-12756.001.patch

> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch
>
>
> 1. argument and variant naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODO



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13529) Do some code refactoring

2016-08-24 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13529 started by Genmao Yu.
--
> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> 1. argument and variant naming
> 2. utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODO



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434335#comment-15434335
 ] 

Hadoop QA commented on HADOOP-13538:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 49s{color} 
| {color:red} root generated 1 new + 710 unchanged - 0 fixed = 711 total (was 
710) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825201/HADOOP-13538.002.patch
 |
| JIRA Issue | HADOOP-13538 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 805a0b5c3046 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c37346d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10354/artifact/patchprocess/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10354/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10354/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Deprecate getInstance and initiate methods with Path in TrashPolicy
> ---
>
> Key: HADOOP-13538
> URL: https://issues.apache.org/jira/browse/HADOOP-13538
> Project: Hadoop Common
>  Issue Type: 

[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2016-08-24 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-13433:
---
Attachment: HADOOP-13433-v2.patch

Attaching v2 to address the checkstyle warnings.

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, 
> HADOOP-13433.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:53)
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:46)
> ... 31 more
> {noformat}
> It rarely happens, but if it happens, the regionserver will be stuck and can 
> never recover.
> Recently we added a log after a 

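To make the failure mode above a bit more concrete: a common way to avoid races 
around keytab relogin is to serialize the refresh behind a lock with a double-check, 
as in the generic sketch below. This is an illustration of the pattern only, not the 
actual fix; the real change is in the attached HADOOP-13433 patches and touches 
UserGroupInformation internals.

{code:java}
// Hypothetical illustration; the real fix lives in the attached patch and is
// implemented inside UserGroupInformation. This sketch only shows the common
// double-checked-locking pattern for serializing keytab relogin.
public class ReloginGuard {

  private static final long MIN_RELOGIN_INTERVAL_MS = 60_000L;

  private final Object loginLock = new Object();
  private volatile long lastReloginTime = 0L;

  /** Runs doRelogin at most once per interval and never from two threads at once. */
  public void reloginIfNeeded(Runnable doRelogin) {
    if (System.currentTimeMillis() - lastReloginTime < MIN_RELOGIN_INTERVAL_MS) {
      return; // another thread refreshed the credentials recently
    }
    synchronized (loginLock) {
      // re-check under the lock so only one caller actually performs the relogin
      if (System.currentTimeMillis() - lastReloginTime < MIN_RELOGIN_INTERVAL_MS) {
        return;
      }
      doRelogin.run(); // e.g. logout() followed by loginFromKeytab()
      lastReloginTime = System.currentTimeMillis();
    }
  }
}
{code}

The double-check keeps concurrent callers from interleaving the logout/login sequence, 
which is the class of race this issue is about.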
[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434289#comment-15434289
 ] 

Hadoop QA commented on HADOOP-13538:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 21s{color} 
| {color:red} root generated 1 new + 710 unchanged - 0 fixed = 711 total (was 
710) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825197/HADOOP-13538.002.patch
 |
| JIRA Issue | HADOOP-13538 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 141d1c0b80a7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c37346d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/artifact/patchprocess/diff-compile-javac-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Deprecate getInstance and initiate methods with Path in TrashPolicy
> 

[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-08-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434286#comment-15434286
 ] 

Hadoop QA commented on HADOOP-13055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 9 new + 43 unchanged - 2 fixed = 52 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 34s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825198/HADOOP-13055.02.patch 
|
| JIRA Issue | HADOOP-13055 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7ae5c6096915 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c37346d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: 
