[jira] [Commented] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649866#comment-15649866
 ] 

Hadoop QA commented on HADOOP-13800:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13800 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838121/HADOOP-13800.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux d3c0393ec630 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed0beba |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11041/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11041/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect 
> on the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}
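
For context: the HDFS audit log level is controlled by the {{hdfs.audit.logger}} 
Java system property consumed by log4j.properties, normally passed on the 
NameNode-specific options line rather than through HADOOP_AUDIT_LOGGER. A hedged 
sketch, assuming the usual variable names (HDFS_NAMENODE_OPTS in Hadoop 3, 
HADOOP_NAMENODE_OPTS in Hadoop 2):

{code:title=hadoop-env.sh (sketch)}
# Sketch only: route the audit log level through the NameNode options line.
# The hdfs.audit.logger property is what log4j.properties actually consumes.
export HDFS_NAMENODE_OPTS="${HDFS_NAMENODE_OPTS} -Dhdfs.audit.logger=INFO,RFAAUDIT"
{code}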



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13800:
---
Attachment: HADOOP-13800.001.patch

> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect 
> on the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13800:
---
Status: Patch Available  (was: Open)

Attached a simple patch as a quick fix. Thanks [~ajisakaa] for reporting.

> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect 
> on the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HADOOP-13800:
--

Assignee: Yiqun Lin

> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
>
> The following example is misleading. The environment variable has no effect 
> on the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649760#comment-15649760
 ] 

Hadoop QA commented on HADOOP-11804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 2 
fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
14s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
34s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . hadoop-client-modules hadoop-client-modules/hadoop-client 
hadoop-client-modules/hadoop-client-api 
hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-runtime hadoop-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 15s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
26s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HADOOP-13660) Upgrade commons-configuration version

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649712#comment-15649712
 ] 

Hadoop QA commented on HADOOP-13660:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 50s{color} | {color:orange} root: The patch generated 7 new + 374 unchanged 
- 8 fixed = 381 total (was 382) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-kafka in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649672#comment-15649672
 ] 

John Zhuge commented on HADOOP-12718:
-

TestZKFailoverController failure is unrelated.

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It would be more informative if the message were:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649625#comment-15649625
 ] 

Hadoop QA commented on HADOOP-12718:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-12718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838105/HADOOP-12718.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a4bb458d8e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 62d8c17 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11040/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11040/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11040/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  

[jira] [Updated] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12718:

Attachment: HADOOP-12718.008.patch

Patch 008:
* Use {{AccessDeniedException}} instead of {{AccessControlException}}
* Add constant {{FSExceptionMessages.PERMISSION_DENIED}}
* Please note test {{testPutSrcFileNoPerm}} should not use the constant
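
A minimal sketch of the idea described above, assuming a hypothetical 
checkLocalSource() helper; the real change sits in the fs -put code path and 
uses the new {{FSExceptionMessages.PERMISSION_DENIED}} constant rather than a 
literal string:

{code}
// Hedged sketch, not the actual patch: report "Permission denied" rather than
// "No such file or directory" when the local source directory is unreadable.
import java.io.File;
import java.nio.file.AccessDeniedException;

class LocalSourceCheck {
  // Hypothetical helper, for illustration only.
  static void checkLocalSource(File src) throws AccessDeniedException {
    if (src.exists() && !src.canRead()) {
      throw new AccessDeniedException(src.getPath(), null, "Permission denied");
    }
  }
}
{code}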

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It would be more informative if the message were:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649507#comment-15649507
 ] 

Hadoop QA commented on HADOOP-13720:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 52 unchanged - 24 fixed = 53 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838062/HADOOP-13720.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3242287058de 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e1c6ef2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11037/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11037/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11037/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
> 

[jira] [Comment Edited] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649501#comment-15649501
 ] 

Xiao Chen edited comment on HADOOP-13720 at 11/9/16 1:51 AM:
-

Thanks [~yzhangal] for the new rev. +1 from me pending:
{code}
throw new AccessControlException(renewer
  + " tries to renew a token (" + id// < why not 
formatTokenId(id) ? 
  + ") with non-matching renewer " + id.getRenewer());
{code}
(and a new jenkins run)


was (Author: xiaochen):
Thanks [~yzhangal] for the new rev. +1 from me pending:
{code}
throw new AccessControlException(renewer
  + " tries to renew a token (" + id// < why not 
formatTokenId(id) ? 
  + ") with non-matching renewer " + id.getRenewer());
{code}

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.
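
A hedged sketch of the requested change, reusing the names from the quoted 
checkToken() snippet; the exact wording in the committed patch may differ 
(for example, it may use the formatTokenId() helper suggested in the comment above):

{code}
// Sketch only: include the renew date so the message shows how long the token
// has gone unrenewed.
if (info.getRenewDate() < Time.now()) {
  throw new InvalidToken("token (" + identifier.toString()
      + ") is expired, current time: " + Time.now()
      + " expected renewal time: " + info.getRenewDate());
}
{code}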



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649501#comment-15649501
 ] 

Xiao Chen commented on HADOOP-13720:


Thanks [~yzhangal] for the new rev. +1 from me pending:
{code}
throw new AccessControlException(renewer
  + " tries to renew a token (" + id// < why not 
formatTokenId(id) ? 
  + ") with non-matching renewer " + id.getRenewer());
{code}

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649418#comment-15649418
 ] 

Sean Busbey commented on HADOOP-11804:
--

-8

 - rebased to trunk (e1c6ef2)
 - remove patch for HADOOP-13789 now that it's been committed

Still working on the mis-shading mentioned earlier.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> Make a hadoop-client-api and a hadoop-client-runtime that e.g. HBase can use 
> to talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> See the proposal on the parent issue for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649406#comment-15649406
 ] 

Hudson commented on HADOOP-13789:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10794 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10794/])
HADOOP-13789. Hadoop Common includes generated test protos in both jar (wang: 
rev e1c6ef2efa9d87fdfc7474ca63998a13a3929874)
* (add) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocTestMojo.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* (add) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/package-info.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
* (add) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocRunner.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* (edit) hadoop-common-project/hadoop-common/pom.xml


> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch, HADOOP-13789.4.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase, and we use it in hadoop-common to generate both main and test files. 
> This results in the test files getting added to both our test jar (correct) 
> and our main jar (not correct).
> We should either add a main-vs-test flag to the ProtocMojo configuration or 
> make a ProtocTestMojo that always adds them as test sources.
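
A hedged sketch of the second option; {{MavenProject.addTestCompileSourceRoot()}} 
is the stock Maven API for registering generated test sources, but the mojo name, 
parameters, and the omitted protoc invocation are illustrative, not the committed 
ProtocTestMojo:

{code}
// Illustrative only: the essential difference between a main and a test
// protoc mojo is which source root the generated directory is registered under.
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;
import org.apache.maven.project.MavenProject;

@Mojo(name = "test-protoc", defaultPhase = LifecyclePhase.GENERATE_TEST_SOURCES)
public class ProtocTestMojo extends AbstractMojo {
  @Parameter(defaultValue = "${project}", readonly = true)
  private MavenProject project;

  @Parameter(defaultValue = "${project.build.directory}/generated-test-sources/java")
  private String output;

  @Override
  public void execute() {
    // ... run protoc into 'output' (omitted) ...
    // Register as a *test* source root so generated classes land only in the
    // test-jar, not the main jar.
    project.addTestCompileSourceRoot(output);
  }
}
{code}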



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649403#comment-15649403
 ] 

Hadoop QA commented on HADOOP-13789:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-13789 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838026/HADOOP-13789.4.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11039/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch, HADOOP-13789.4.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase, and we use it in hadoop-common to generate both main and test files. 
> This results in the test files getting added to both our test jar (correct) 
> and our main jar (not correct).
> We should either add a main-vs-test flag to the ProtocMojo configuration or 
> make a ProtocTestMojo that always adds them as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649401#comment-15649401
 ] 

Hadoop QA commented on HADOOP-11804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-11804 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-11804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838047/HADOOP-11804.7.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11036/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> Make a hadoop-client-api and a hadoop-client-runtime that e.g. HBase can use 
> to talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> See the proposal on the parent issue for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-08 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649354#comment-15649354
 ] 

Chris Douglas commented on HADOOP-13794:


bq. here's the thread on legal-discuss@
Thanks for the reference.

bq. I'm happy to have VP legal hear additional feedback on the decision.
No thanks. It's a waste of time already; no reason to compound it.

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per the [updated resolved legal page|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13789:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for working on this Sean! I've committed this to trunk and branch-2.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch, HADOOP-13789.4.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase, and we use it in hadoop-common to generate both main and test files. 
> This results in the test files getting added to both our test jar (correct) 
> and our main jar (not correct).
> We should either add a main-vs-test flag to the ProtocMojo configuration or 
> make a ProtocTestMojo that always adds them as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649341#comment-15649341
 ] 

Andrew Wang commented on HADOOP-13789:
--

It looks like the build was aborted. Precommit can have a hard time with big 
patches.

Considering v4 just fixes the whitespace nit, I'm going to go ahead and commit 
it. This will hopefully free up HADOOP-11804 too.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch, HADOOP-13789.4.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase, and we use it in hadoop-common to generate both main and test files. 
> This results in the test files getting added to both our test jar (correct) 
> and our main jar (not correct).
> We should either add a main-vs-test flag to the ProtocMojo configuration or 
> make a ProtocTestMojo that always adds them as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649310#comment-15649310
 ] 

Hudson commented on HADOOP-13782:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10793 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10793/])
HADOOP-13782. Make MutableRates metrics thread-local write, (zhz: rev 
77c13c385774c51766fe505397fa916754ac08d4)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcDetailedMetrics.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableRatesWithAggregation.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableMetricsFactory.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableRates.java


> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch, HADOOP-13782.006.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 
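
A hedged sketch of the general thread-local-write / aggregate-on-read pattern the 
description refers to, using plain JDK types; this is not the actual 
{{MutableRatesWithAggregation}} code, only the shape of the idea:

{code}
// Sketch of the pattern only: each thread updates its own counters without a
// shared lock; a snapshot walks all per-thread maps and sums them.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class ThreadLocalRates {
  // Weakly consistent registry of every thread's private map.
  private final Set<Map<String, LongAdder>> allThreadMaps =
      ConcurrentHashMap.newKeySet();

  private final ThreadLocal<Map<String, LongAdder>> local =
      ThreadLocal.withInitial(() -> {
        Map<String, LongAdder> m = new ConcurrentHashMap<>();
        allThreadMaps.add(m);
        return m;
      });

  /** Hot path: only this thread's map is touched, so unrelated updates don't serialize. */
  void add(String name, long value) {
    local.get().computeIfAbsent(name, k -> new LongAdder()).add(value);
  }

  /** Read path: aggregate across all threads (approximate under concurrent writes). */
  Map<String, Long> snapshot() {
    Map<String, Long> totals = new HashMap<>();
    for (Map<String, LongAdder> m : allThreadMaps) {
      m.forEach((name, adder) -> totals.merge(name, adder.sum(), Long::sum));
    }
    return totals;
  }
}
{code}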



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13782:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   2.8.0
   Status: Resolved  (was: Patch Available)

I just committed the patch to trunk through branch-2.7. Thanks Erik for the 
contribution!

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch, HADOOP-13782.006.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649260#comment-15649260
 ] 

Sean Busbey commented on HADOOP-13794:
--

{quote}
This is idiotic.
{quote}

I'm happy to have VP legal hear additional feedback on the decision. [here's 
the thread on 
legal-discuss@|https://lists.apache.org/thread.html/9627a9278d263378a2045d4bffccb6e83b9f01bb783c6dd6fa325faf@%3Clegal-discuss.apache.org%3E]

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per the [updated resolved legal page|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649256#comment-15649256
 ] 

Hudson commented on HADOOP-13802:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10792 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10792/])
HADOOP-13802. Make generic options help more consistent, and aligned. (liuml07: 
rev 2a65eb121e23243fcb642d28b3f74241536485d8)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch, HADOOP-13802.4.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf  specify an application configuration file
> -D 

[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649245#comment-15649245
 ] 

Hadoop QA commented on HADOOP-13782:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 21 unchanged - 2 fixed = 23 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838071/HADOOP-13782.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f440f8122d25 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29e3b34 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11034/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11034/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11034/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>

[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649241#comment-15649241
 ] 

John Zhuge commented on HADOOP-12718:
-

Javadoc for {{AccessDeniedException}}:
{noformat}
 * Checked exception thrown when a file system operation is denied, typically
 * due to a file permission or other access check.
 *
 *  This exception is not related to the {@link
 * java.security.AccessControlException AccessControlException} or {@link
 * SecurityException} thrown by access controllers or security managers when
 * access to a file is denied.
{noformat}
{{AccessDeniedException}} seems to be a better choice than 
{{AccessControlException}}. ACE is widely used, while ADE is only used in 
hadoop-aws and hadoop-yarn-common, probably because it was introduced in Java 1.7. 
{{PathPermissionException}} is only used 3 times in tests.
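For illustration only, a tiny self-contained example of using 
{{java.nio.file.AccessDeniedException}} so the message carries the path and 
"Permission denied" instead of the misleading "No such file or directory"; the 
helper below is hypothetical and is not the actual FsShell/copy code path.

{code}
import java.io.File;
import java.io.FileNotFoundException;
import java.nio.file.AccessDeniedException;

/**
 * Hypothetical helper, not the HADOOP-12718 patch: when a local source exists
 * but cannot be read, raise AccessDeniedException with the path so the caller
 * can print something like "put: d1 (Permission denied)".
 */
public class LocalSourceCheck {
  static void checkReadable(File src) throws FileNotFoundException, AccessDeniedException {
    if (!src.exists()) {
      throw new FileNotFoundException(src + ": No such file or directory");
    }
    if (!src.canRead()) {
      // AccessDeniedException(file, other, reason) keeps the path in the message
      throw new AccessDeniedException(src.getPath(), null, "Permission denied");
    }
  }

  public static void main(String[] args) throws Exception {
    checkReadable(new File(args.length > 0 ? args[0] : "."));
    System.out.println("readable");
  }
}
{code}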

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13802:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} and {{branch-2}}. Thanks for your contribution, [~gsohn].

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch, HADOOP-13802.4.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf  specify an application configuration file
> -D 

[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-08 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649218#comment-15649218
 ] 

Chris Douglas commented on HADOOP-13794:


This is idiotic.

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per [update resolved legal|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test-time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649216#comment-15649216
 ] 

Hadoop QA commented on HADOOP-13720:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
42s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m 
59s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m 59s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 52 unchanged - 24 fixed = 53 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 170 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
4s{color} | {color:red} The patch has 384 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
19s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838062/HADOOP-13720.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ed6d82f966ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29e3b34 |
| Default Java | 1.8.0_101 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11032/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11032/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11032/artifact/patchprocess/patch-compile-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11032/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11032/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 

[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649196#comment-15649196
 ] 

Sean Busbey commented on HADOOP-11804:
--

{quote}
bq. If I wanted to try to refactor out the use of log4j in JobConf, do you 
think that'd be feasible in time for 3.0?
That'd be awesome! If this is just a mechanical replacement of log4j API usage 
with SLF4J, someone might pick it up if you file the JIRA.
{quote}

I didn't look too closely, but I think it's changing how we internally 
represent default log levels for jobs. I'll see if I can succinctly phrase the 
request as a JIRA tomorrow.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649195#comment-15649195
 ] 

Hadoop QA commented on HADOOP-13802:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 30 unchanged - 4 fixed = 30 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 26s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838066/HADOOP-13802.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ce58a75a275f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29e3b34 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11033/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11033/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11033/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: 

[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649192#comment-15649192
 ] 

Sean Busbey commented on HADOOP-11804:
--

Odd. The precommit build is marked as aborted, and a bunch of the referenced 
artifacts didn't get archived.

Let me re-run things.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649187#comment-15649187
 ] 

Andrew Wang commented on HADOOP-11804:
--

bq. If I wanted to try to refactor out the use of log4j in JobConf, do you 
think that'd be feasible in time for 3.0?

That'd be awesome! If this is just a mechanical replacement of log4j API usage 
with SLF4J, someone might pick it up if you file the JIRA.
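For context, the "mechanical replacement" part typically looks like the 
before/after sketch below. This is a generic illustration against the slf4j-api 
only ("SomeClass" is a placeholder, not JobConf), and the harder question raised 
above about how default job log levels are represented is deliberately not 
addressed here.

{code}
// Generic before/after sketch of replacing direct log4j API usage with SLF4J.

// Before: direct dependency on the log4j API
// import org.apache.log4j.Logger;
// private static final Logger LOG = Logger.getLogger(SomeClass.class);
// LOG.info("submitting job " + jobId);

// After: code only against the SLF4J facade; the logging backend is a runtime choice
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SomeClass {
  private static final Logger LOG = LoggerFactory.getLogger(SomeClass.class);

  void submit(String jobId) {
    // parameterized message: no string concatenation when INFO is disabled
    LOG.info("submitting job {}", jobId);
  }
}
{code}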

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649180#comment-15649180
 ] 

Hadoop QA commented on HADOOP-11804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
43s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 76 new + 0 unchanged - 0 fixed = 
76 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m  8s{color} | {color:orange} The patch generated 124 new + 0 unchanged - 0 
fixed = 124 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
24s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . hadoop-client-modules hadoop-client-modules/hadoop-client 
hadoop-client-modules/hadoop-client-api 
hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-minicluster 
hadoop-client-modules/hadoop-client-runtime hadoop-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} patch/hadoop-common-project/hadoop-common no findbugs 
output file (hadoop-common-project/hadoop-common/target/findbugsXml.xml) 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} 
patch/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common
 no findbugs output file 
(hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/target/findbugsXml.xml)
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} 
patch/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle
 no findbugs 

[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13782:
---
Hadoop Flags: Reviewed

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch, HADOOP-13782.006.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649117#comment-15649117
 ] 

Zhe Zhang commented on HADOOP-13782:


Thanks Erik! +1 on v6 patch pending Jenkins.

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch, HADOOP-13782.006.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-13782:
-
Attachment: HADOOP-13782.006.patch

Attaching v006 patch with two minor tweaks to v005:
(1) increased randomness in the test case
(2) reduced the synchronization on {{init}}, since that method is called in 
each invocation of {{RpcInvoker.call}}. Now it only requires synchronization if 
a new protocol is added to the cache; this stays more in line with the previous 
behavior of {{MutableRates}}. 
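In other words, the common case becomes a lock-free cache hit and the lock is 
only taken when a protocol is seen for the first time. A rough, hypothetical 
sketch of that shape (not the code in the patch):

{code}
import java.lang.reflect.Method;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative shape of "only synchronize when a new protocol is added":
 * repeated init() calls for an already-registered protocol are a lock-free
 * set lookup; registration work runs at most once per protocol.
 * Hypothetical names; not the code from the HADOOP-13782 patch.
 */
public class ProtocolMetricsCache {
  private final Set<Class<?>> registered = ConcurrentHashMap.newKeySet();

  public void init(Class<?> protocol) {
    if (registered.contains(protocol)) {
      return; // hot path: no lock, no registration
    }
    synchronized (this) {
      if (registered.add(protocol)) {
        for (Method m : protocol.getDeclaredMethods()) {
          registerMetric(m.getName()); // one-time setup per protocol
        }
      }
    }
  }

  private void registerMetric(String name) {
    System.out.println("registered metric for " + name);
  }

  public static void main(String[] args) {
    ProtocolMetricsCache cache = new ProtocolMetricsCache();
    cache.init(Runnable.class); // first call registers
    cache.init(Runnable.class); // second call returns on the fast path
  }
}
{code}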

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch, HADOOP-13782.006.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649098#comment-15649098
 ] 

Sangjin Lee commented on HADOOP-11804:
--

Oh got you. I read it too quickly.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649088#comment-15649088
 ] 

Sean Busbey commented on HADOOP-11804:
--

Right, the problem isn't the inlining; it's that the key has been rewritten to 
use {{org.apache.hadoop.shaded}} as a prefix, which presumably none of our 
actual configuration files use.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649050#comment-15649050
 ] 

Sangjin Lee commented on HADOOP-11804:
--

The javac compiler inlines string or integer primitive constants.
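A small standalone example of that behavior (class names are made up): because 
the field below is a compile-time constant, the caller's class file carries the 
string literal itself rather than a reference back to the defining class, which 
is why a relocation pass that rewrites string constants can also rewrite a 
configuration key like {{io.serializations}}.

{code}
// Hypothetical classes illustrating javac constant inlining.
// After compiling, `javap -c KeyUser` shows the literal "io.serializations"
// loaded with ldc from KeyUser's own constant pool; there is no runtime
// reference back to ConfigKeys, so rewriting string constants during shading
// rewrites the key in every class that referenced the constant.
class ConfigKeys {
  static final String IO_SERIALIZATIONS_KEY = "io.serializations"; // compile-time constant
}

public class KeyUser {
  public static void main(String[] args) {
    // javac replaces this field reference with the literal at compile time
    System.out.println(ConfigKeys.IO_SERIALIZATIONS_KEY);
  }
}
{code}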

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649040#comment-15649040
 ] 

Sean Busbey commented on HADOOP-11804:
--

{quote}
I tried it with Avro and got NoClassDefFound for Log4J

I think this is expected based on the contents of the hadoop-client-runtime 
pom.xml, which marks log4j as optional.
{quote}

Yes, that's correct. Since Avro uses JobConf it will have to add a dependency 
on log4j. I'll make sure to call that out in docs that provide examples of use. 
If I wanted to try to refactor out the use of log4j in JobConf, do you think 
that'd be feasible in time for 3.0?

{quote}
I decompiled the SerializationFactory class, and noticed that it messed with 
the config key. I think we need to add some kind of exclusion for 
CommonConfigurationKeysPublic.
{code}
// before
if (conf.get(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY).equals("")) {
// decompiled
if (conf.get("org.apache.hadoop.shaded.io.serializations").equals("")) {
{code}
{quote}

Interesting. Let me figure out a test for this and come up with a fix.


> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13802:
---
Attachment: HADOOP-13802.4.patch

The v4 patch fixes the checkstyle warnings. I'll commit this if Jenkins does 
not complain again. No tests are needed.

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch, HADOOP-13802.4.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf  specify an application configuration file
> -D 

[jira] [Updated] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13720:
---
Attachment: HADOOP-13720.006.patch

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.
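As a hedged sketch of the request, the message could carry both timestamps. The 
exact wording and types below are placeholders that mimic the quoted 
{{checkToken}} logic with plain JDK classes instead of Hadoop's 
{{InvalidToken}}/{{DelegationTokenInformation}}; this is not the committed change.

{code}
/**
 * Standalone sketch only: include both the expected renewal time and the
 * current time in the "expired" message so logs show how long the token
 * went unrenewed.
 */
public class ExpiredTokenMessage {
  static void checkExpiry(String identifier, long renewDate) {
    long now = System.currentTimeMillis();
    if (renewDate < now) {
      throw new IllegalStateException("token (" + identifier + ") is expired,"
          + " current time: " + now + ", expected renewal time: " + renewDate);
    }
  }

  public static void main(String[] args) {
    try {
      checkExpiry("owner=alice, renewer=yarn, sequenceNumber=42",
          System.currentTimeMillis() - 60_000L);
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}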



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13720:
---
Attachment: (was: HADOOP-13720.006.patch)

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648989#comment-15648989
 ] 

Grant Sohn commented on HADOOP-13802:
-

The code change only reformats text, so it does not require a test.

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf  specify an application configuration file
> -D 

[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648956#comment-15648956
 ] 

Hadoop QA commented on HADOOP-13720:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 52 unchanged - 24 fixed = 53 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838051/HADOOP-13720.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 257dd221aeb5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11031/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11031/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11031/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
> 

[jira] [Comment Edited] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648921#comment-15648921
 ] 

Andrew Wang edited comment on HADOOP-11804 at 11/8/16 10:00 PM:


Thanks for the rev Sean. I tried it with Avro and got NoClassDefFound for Log4J:

{noformat}
testSort(org.apache.avro.mapred.TestAvroTextSort)  Time elapsed: 0.051 sec  <<< 
ERROR!
java.lang.NoClassDefFoundError: org/apache/log4j/Level
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.mapred.JobConf.(JobConf.java:356)
at 
org.apache.avro.mapred.TestAvroTextSort.testSort(TestAvroTextSort.java:37)
{noformat}

I think this is expected based on the contents of the hadoop-client-runtime 
pom.xml, which marks log4j as optional. I manually added this dependency, and 
then hit this:

{noformat}
testReadAvro(org.apache.avro.hadoop.io.TestAvroSequenceFile)  Time elapsed: 
0.016 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.io.serializer.SerializationFactory.(SerializationFactory.java:58)
at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1248)
at 
org.apache.hadoop.io.SequenceFile$Writer.(SequenceFile.java:1207)
at 
org.apache.avro.hadoop.io.AvroSequenceFile$Writer.(AvroSequenceFile.java:532)
at 
org.apache.avro.hadoop.io.TestAvroSequenceFile.writeSequenceFile(TestAvroSequenceFile.java:200)
at 
org.apache.avro.hadoop.io.TestAvroSequenceFile.testReadAvro(TestAvroSequenceFile.java:53)
{noformat}

I decompiled the SerializationFactory class, and noticed that it messed with 
the config key. I think we need to add some kind of exclusion for 
CommonConfigurationKeysPublic.

{code}
// before
if (conf.get(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY).equals("")) {
// decompiled
if (conf.get("org.apache.hadoop.shaded.io.serializations").equals("")) {
{code}

Here's my Avro diff for master (without the log4j addition) if you want to try 
this yourself:

https://gist.github.com/anonymous/c064c283348a2d1bbec00845678339f9


was (Author: andrew.wang):
Thanks for the rev Sean. I tried it with Avro and got NoClassDefFound for Log4J:

{noformat}
testSort(org.apache.avro.mapred.TestAvroTextSort)  Time elapsed: 0.051 sec  <<< 
ERROR!
java.lang.NoClassDefFoundError: org/apache/log4j/Level
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.mapred.JobConf.<clinit>(JobConf.java:356)
at 
org.apache.avro.mapred.TestAvroTextSort.testSort(TestAvroTextSort.java:37)
{noformat}

I think this is expected based on the contents of the hadoop-client-runtime 
pom.xml, which marks log4j as optional. I manually added this dependency, and 
then hit this:

{noformat}
testReadAvro(org.apache.avro.hadoop.io.TestAvroSequenceFile)  Time elapsed: 
0.016 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.io.serializer.SerializationFactory.<init>(SerializationFactory.java:58)
at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1248)
at 
org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1207)
at 
org.apache.avro.hadoop.io.AvroSequenceFile$Writer.<init>(AvroSequenceFile.java:532)
at 
org.apache.avro.hadoop.io.TestAvroSequenceFile.writeSequenceFile(TestAvroSequenceFile.java:200)
at 
org.apache.avro.hadoop.io.TestAvroSequenceFile.testReadAvro(TestAvroSequenceFile.java:53)
{noformat}

I decompiled the SerializationFactory class, and noticed that it messed with 
the config key. I think we need to add some kind of exclusion for 
CommonConfigurationKeysPublic.

{code}
// before
if (conf.get(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY).equals("")) {
// decompiled
if (conf.get("org.apache.hadoop.shaded.io.serializations").equals("")) {
{code}

Here's my Avro diff for master (without the log4j addition) if you want to try 
this yourself:

https://gist.github.com/anonymous/c064c283348a2d1bbec00845678339f9

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, 

[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648921#comment-15648921
 ] 

Andrew Wang commented on HADOOP-11804:
--

Thanks for the rev Sean. I tried it with Avro and got NoClassDefFound for Log4J:

{noformat}
testSort(org.apache.avro.mapred.TestAvroTextSort)  Time elapsed: 0.051 sec  <<< 
ERROR!
java.lang.NoClassDefFoundError: org/apache/log4j/Level
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.mapred.JobConf.<clinit>(JobConf.java:356)
at 
org.apache.avro.mapred.TestAvroTextSort.testSort(TestAvroTextSort.java:37)
{noformat}

I think this is expected based on the contents of the hadoop-client-runtime 
pom.xml, which marks log4j as optional. I manually added this dependency, and 
then hit this:

{noformat}
testReadAvro(org.apache.avro.hadoop.io.TestAvroSequenceFile)  Time elapsed: 
0.016 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.io.serializer.SerializationFactory.<init>(SerializationFactory.java:58)
at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1248)
at 
org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1207)
at 
org.apache.avro.hadoop.io.AvroSequenceFile$Writer.<init>(AvroSequenceFile.java:532)
at 
org.apache.avro.hadoop.io.TestAvroSequenceFile.writeSequenceFile(TestAvroSequenceFile.java:200)
at 
org.apache.avro.hadoop.io.TestAvroSequenceFile.testReadAvro(TestAvroSequenceFile.java:53)
{noformat}

I decompiled the SerializationFactory class, and noticed that it messed with 
the config key. I think we need to add some kind of exclusion for 
CommonConfigurationKeysPublic.

{code}
// before
if (conf.get(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY).equals("")) {
// decompiled
if (conf.get("org.apache.hadoop.shaded.io.serializations").equals("")) {
{code}

Here's my Avro diff for master (without the log4j addition) if you want to try 
this yourself:

https://gist.github.com/anonymous/c064c283348a2d1bbec00845678339f9

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648839#comment-15648839
 ] 

Yongjun Zhang commented on HADOOP-13720:


Thanks [~xiaochen]. Good comments!

I uploaded rev06 to address all of them except the second one.


> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.
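
For illustration, a hedged sketch (not the attached patches) of the direction this issue proposes: include the expected renewal time and the current time in the "is expired" message so the token's age is visible in the log.

{code}
protected DelegationTokenInformation checkToken(TokenIdent identifier)
    throws InvalidToken {
  assert Thread.holdsLock(this);
  DelegationTokenInformation info = getTokenInfo(identifier);
  if (info == null) {
    throw new InvalidToken("token (" + identifier.toString()
        + ") can't be found in cache");
  }
  long now = Time.now();
  if (info.getRenewDate() < now) {
    // Including both timestamps shows how long the token has gone unrenewed.
    throw new InvalidToken("token (" + identifier.toString() + ") is expired,"
        + " current time: " + now
        + ", expected renewal time: " + info.getRenewDate());
  }
  return info;
}
{code}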



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13720:
---
Attachment: HADOOP-13720.006.patch

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13720:
---
Summary: Add more info to the msgs printed in 
AbstractDelegationTokenSecretManager for better supportability  (was: Add more 
info to "token ... is expired" message)

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648791#comment-15648791
 ] 

Andrew Wang commented on HADOOP-13789:
--

+1 pending. Jenkins will take a while since it's running the full test suite, 
but I'll commit when it comes back.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch, HADOOP-13789.4.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds them as test sources.
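
For illustration, a hedged sketch of the first option (a main-vs-test flag); the parameter, field, and method names below are illustrative, not the actual hadoop-maven-plugins code.

{code}
import java.io.File;

import org.apache.maven.plugins.annotations.Parameter;
import org.apache.maven.project.MavenProject;

// Illustrative only; not the real ProtocMojo.
abstract class ProtocSourcesSketch {
  // The proposed main-vs-test flag.
  @Parameter(defaultValue = "false")
  private boolean testSources;

  @Parameter(defaultValue = "${project}", readonly = true)
  private MavenProject project;

  void addGeneratedSourceRoot(File generatedDir) {
    if (testSources) {
      // Registered as a test source root: ends up only in the test jar.
      project.addTestCompileSourceRoot(generatedDir.getAbsolutePath());
    } else {
      project.addCompileSourceRoot(generatedDir.getAbsolutePath());
    }
  }
}
{code}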



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.7.patch

-07

  - rebased to trunk (dbb133c)
  - update to v4 of HADOOP-13789
  - prevent spurious jars
  - make sure dependencies excluded from shading aren't relocated
  - add client jars to dist tarball


> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Status: Patch Available  (was: In Progress)

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648691#comment-15648691
 ] 

Hadoop QA commented on HADOOP-13802:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 6 new + 30 unchanged - 4 fixed = 36 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838035/HADOOP-13802.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 455dd2b763c6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11027/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11027/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11027/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> 

[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-13782:
-
Attachment: HADOOP-13782.005.patch

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 
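
For illustration, a generic sketch of the thread-local-write, aggregate-on-read idea described above (simplified long counters rather than the real {{SampleStat}}-based metrics; the class and method names are illustrative, not the patch):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ThreadLocalRates {
  // One private counter map per thread; cleanup of dead threads, which the
  // real patch has to consider, is ignored here for brevity.
  private final Map<Thread, Map<String, LongAdder>> perThread =
      new ConcurrentHashMap<>();

  /** Write path: threads update their own maps, so unrelated updates never contend. */
  public void add(String name, long value) {
    perThread
        .computeIfAbsent(Thread.currentThread(), t -> new ConcurrentHashMap<>())
        .computeIfAbsent(name, n -> new LongAdder())
        .add(value);
  }

  /** Read path (snapshot): aggregate every thread's counters into one view. */
  public synchronized Map<String, Long> snapshot() {
    Map<String, Long> global = new ConcurrentHashMap<>();
    for (Map<String, LongAdder> local : perThread.values()) {
      local.forEach((name, adder) ->
          global.merge(name, adder.sumThenReset(), Long::sum));
    }
    return global;
  }
}
{code}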



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648628#comment-15648628
 ] 

Erik Krogen commented on HADOOP-13782:
--

Thanks for the comments Zhe. I made {{snapshot}}/{{init}} {{synchronized}} 
methods; there wasn't really a reason for them to synchronize on 
{{globalMetrics}} rather than simply being {{synchronized}}. I also 
created {{ThreadSafeSampleStat}}; note that since the collection of the 
{{numSamples}}/{{total}} values and the {{reset}} should be done as one atomic 
step, I decided to create a wrapper class rather than a subclass. 
{{threadLocalMetricsMap}} needs to be a concurrent map since a snapshotting 
thread may be reading it while the local thread is doing a {{put}} operation. 

Good catch on the snapshot behavior not fully clearing; the correct behavior 
was lost between the v001 and v002 patches. I integrated the correct logic into 
{{ThreadSafeSampleStat}}. 

Attaching v005 patch. 
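
For illustration, a hedged, self-contained sketch of the wrapper idea described above (the field and method names are illustrative, not necessarily the patch's): reading the window's count/sum and resetting it happen under one lock, so a concurrent {{add}} can never land between the read and the reset.

{code}
class ThreadSafeSampleStat {
  private long numSamples;
  private double total;

  synchronized void add(double value) {
    numSamples++;
    total += value;
  }

  /** Returns {count, sum} for the last window and starts a new one, atomically. */
  synchronized double[] snapshotAndReset() {
    double[] window = new double[] { numSamples, total };
    numSamples = 0;
    total = 0.0;
    return window;
  }
}
{code}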

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch, 
> HADOOP-13782.005.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648515#comment-15648515
 ] 

Zhe Zhang commented on HADOOP-13782:


Thanks Erik for the update! With the HADOOP-13804 change, the new class is 
much cleaner :)

The current concurrency model is still a little complicated. {{snapshot}} has a 
nested synchronization on {{globalMetrics}} and {{stat}}, where {{stat}} is a 
local variable. Maybe we can simplify the concurrency model by:
# Make {{globalMetrics}} a ConcurrentMap
# Do we want to support multiple threads doing {{snapshot}} at the same time? 
If not, we should probably make it a synchronized method so it's easier to 
maintain and reason about
# Maybe create a concurrent version of {{SampleStat}}, because that's the 
only object we want to protect from concurrent updating (local thread adding, 
and the snapshotting thread resetting).
{code}
  private class ConcurrentSampleStat extends SampleStat {
@Override
public synchronized void reset(){
  super.reset();
}
@Override
public synchronized SampleStat add(double x) {
  return super.add(x);
}
  }
{code}
# {{threadLocalMetricsMap}} can be a regular map instead of a concurrent one?

Also, IIUC, {{snapshot}} is supposed to clear all metrics from the last window. 
In the v4 patch, if a certain type of metrics appeared in the last window but 
disappears in the current window (e.g. thread dies), the entry in 
{{globalMetrics}} is not cleared.

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648517#comment-15648517
 ] 

Grant Sohn commented on HADOOP-13802:
-

Fixed.

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf <configuration file>     specify an application configuration file
> -D 

[jira] [Updated] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn updated HADOOP-13802:

Attachment: HADOOP-13802.3.patch

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf <configuration file>     specify an application configuration file
> -D 

[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648384#comment-15648384
 ] 

Hadoop QA commented on HADOOP-13782:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 21 unchanged - 2 fixed = 23 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838017/HADOOP-13782.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cff25740c5bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11025/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11025/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11025/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>

[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message

2016-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648376#comment-15648376
 ] 

Xiao Chen commented on HADOOP-13720:


Thanks [~yzhangal] for the new revs, and Steve for the reviews.

Looks great overall. Some nit comments:
- Please update the jira title as you proposed. :)
- It requires more operations (creating the thread-local format, and formatting
the time) when an exception happens, but I think that's fine. (A sketch of the
thread-local format idea follows below.)
- I see some exceptions thrown with {{(identifier)}} while some are thrown
without the {{()}}. Suggest making them consistent.
- Can we add a unit test for the new {{Time#formatTime}}? OTOH there are no
existing tests.
- {{long curTime = Time.now();}}: naming it {{now}} would be more consistent
with current code, for example {{renewToken}}.
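
For illustration, the thread-local formatter sketch referenced above (the real {{Time#formatTime}} in the patch may differ; the class name and pattern string are illustrative):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public final class TimeFormat {
  // SimpleDateFormat is not thread-safe, hence one instance per thread.
  private static final ThreadLocal<SimpleDateFormat> FORMAT =
      ThreadLocal.withInitial(
          () -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSSZ"));

  public static String formatTime(long millis) {
    return FORMAT.get().format(new Date(millis));
  }

  private TimeFormat() {
  }
}
{code}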

> Add more info to "token ... is expired" message
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we could know 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Create this jira as a request to add that part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13789:
-
Attachment: HADOOP-13789.4.patch

-4

  - rebased to trunk (026b39a)
  - fixed an additional whitespace nit.

Locally I can confirm there are no new findbugs issues in the module whose 
findbugs run didn't produce an XML output.

Locally I can also confirm that the reported test failures either also fail on 
trunk at ref 026b39a (TestContainerManagerSecurity) or don't fail for me (the 
rest).

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch, HADOOP-13789.4.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds them as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13782:
---
Target Version/s: 2.7.4

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648268#comment-15648268
 ] 

Xiao Chen commented on HADOOP-13590:


Andrew had a +1 pending earlier, but patch 10 also addressed Steve's comment in 
the test. So I'm not sure: is that +1 pending still valid?

Thank you both again for the reviews.

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovered no renewal will be done and client will eventually fail to 
> authenticate. We should retry with our best effort, until tgt expires, in the 
> hope that the error recovers before that.
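
For illustration, a hedged sketch of the retry policy this issue describes (not the attached patches; the names and backoff values are illustrative):

{code}
import java.io.IOException;

// Illustrative only: keep retrying a failed TGT renewal with capped backoff,
// and only let the renewal thread give up once the TGT itself has expired.
class TgtRenewalRetrySketch {
  void renewWithRetries(long tgtEndTimeMillis) throws InterruptedException {
    long backoffMillis = 60_000L;            // illustrative initial backoff
    final long maxBackoffMillis = 600_000L;  // illustrative cap
    while (System.currentTimeMillis() < tgtEndTimeMillis) {
      try {
        relogin();        // stand-in for the real kinit/relogin call
        return;           // success: normal renewal scheduling resumes
      } catch (IOException ioe) {
        long remaining = tgtEndTimeMillis - System.currentTimeMillis();
        Thread.sleep(Math.min(backoffMillis, Math.max(remaining, 0L)));
        backoffMillis = Math.min(backoffMillis * 2, maxBackoffMillis);
      }
    }
    // TGT end time has passed; exiting here matches "retry until TGT expires".
  }

  void relogin() throws IOException { /* placeholder for the real relogin */ }
}
{code}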



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-13782:
-
Attachment: HADOOP-13782.004.patch

Some extra characters snuck into {{MutableRate.java}} in the v003 patch... 
Attaching v004 patch to remedy.

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch, HADOOP-13782.004.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648180#comment-15648180
 ] 

Hadoop QA commented on HADOOP-13782:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
49s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
24s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 24s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 21 unchanged - 3 fixed = 26 total (was 24) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838003/HADOOP-13782.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d99c978c70c 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_111 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11022/artifact/patchprocess/branch-compile-root.txt
 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11022/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11022/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11022/artifact/patchprocess/patch-compile-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11022/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11022/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 

[jira] [Updated] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13687:
---
Attachment: HADOOP-13687-trunk.006.patch

I'm attaching trunk revision 006, once again attempting to fix the version 
numbers.

> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch, HADOOP-13687-trunk.004.patch, 
> HADOOP-13687-trunk.005.patch, HADOOP-13687-trunk.006.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648141#comment-15648141
 ] 

Hadoop QA commented on HADOOP-13687:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-tools_hadoop-openstack generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-cloud-storage in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-cloud-storage-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-cloud-storage in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13687 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837995/HADOOP-13687-trunk.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Updated] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-08 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-13782:
-
Attachment: HADOOP-13782.003.patch

Attaching v003 patch removing some unused imports. 

> Make MutableRates metrics thread-local write, aggregate-on-read
> ---
>
> Key: HADOOP-13782
> URL: https://issues.apache.org/jira/browse/HADOOP-13782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-13782.000.patch, HADOOP-13782.001.patch, 
> HADOOP-13782.002.patch, HADOOP-13782.003.patch
>
>
> Currently the {{MutableRates}} metrics class serializes all writes to metrics 
> it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
> increments of unrelated metrics contained within the same {{MutableRates}} 
> object will serialize w.r.t. each other). This class is used by 
> {{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
> modify these metrics. Instead we should allow updates to unrelated metrics 
> objects to happen concurrently. To do so we can let each thread locally 
> collect metrics, and on a {{snapshot}}, aggregate the metrics from all of the 
> threads. 
> I have collected some benchmark performance numbers in HADOOP-13747 
> (https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
> which indicate that this can bring significantly higher performance in high 
> contention situations. 
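
For readers skimming this thread, here is a minimal sketch of the thread-local
write / aggregate-on-read idea using the JDK's striped adders. It is an
illustration only, not the attached patch; the class name is made up.

{code}
import java.util.concurrent.atomic.DoubleAdder;
import java.util.concurrent.atomic.LongAdder;

/** Illustration only: contention-free writes, aggregation deferred to read. */
class AggregateOnReadRate {
  private final LongAdder numOps = new LongAdder();        // per-thread cells
  private final DoubleAdder totalTime = new DoubleAdder();

  /** Hot path: each caller updates its own cell, no shared lock. */
  void add(double elapsedMillis) {
    numOps.increment();
    totalTime.add(elapsedMillis);
  }

  /** Cold path (snapshot): sum across all cells to get the aggregate view. */
  double meanTime() {
    long ops = numOps.sum();
    return ops == 0 ? 0.0 : totalTime.sum() / ops;
  }
}
{code}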



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648035#comment-15648035
 ] 

Mingliang Liu commented on HADOOP-13687:


Like the idea, and +1 on the v5 patch pending Jenkins.

> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch, HADOOP-13687-trunk.004.patch, 
> HADOOP-13687-trunk.005.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.
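
As a hedged illustration of the intent (the artifact id below is an assumption
taken from the hadoop-cloud-storage module names in the QA output earlier in
this thread, and the version property is a placeholder), a downstream build
would then need only a single dependency:

{code}
<!-- Hypothetical downstream usage; artifact id and version are assumptions. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-cloud-storage</artifactId>
  <version>${hadoop.version}</version>
</dependency>
{code}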



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13687:
---
Attachment: HADOOP-13687-trunk.005.patch

I'm uploading trunk revision 005 to correct the version numbers in the pom.xml 
files.

> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch, HADOOP-13687-trunk.004.patch, 
> HADOOP-13687-trunk.005.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647943#comment-15647943
 ] 

Steve Loughran commented on HADOOP-13687:
-

Jenkins was unhappy because the pom still said 2.9.0-SNAPSHOT.

> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch, HADOOP-13687-trunk.004.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-11-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647866#comment-15647866
 ] 

Brahma Reddy Battula edited comment on HADOOP-13707 at 11/8/16 3:47 PM:


Pushed to trunk..


[~ste...@apache.org] can we delete the master branch..? Shall we discuss it on 
the mailing list..?
Am I wrong here..?  Thanks


was (Author: brahmareddy):
Pushed to trunk..


[~ste...@apache.org] can we delete the master branch..? Or shall we discuss it 
on the mailing list..? Thanks

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and enabling HTTP SPNEGO are two 
> separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some links 
> cannot be accessed, such as "/logs", and the server returns an error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.
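
A rough sketch of the guard being described, for illustration only (this is not
the committed patch, and the helper wrapper here is invented for the example):

{code}
import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.http.HttpServer2;

/** Hypothetical helper: only consult the admin ACL when HTTP auth is configured. */
final class AdminAccessGuard {
  static boolean isPageAccessAllowed(ServletContext ctx,
      HttpServletRequest request, HttpServletResponse response)
      throws IOException {
    if (request.getAuthType() == null) {
      // No HTTP authentication scheme (e.g. SPNEGO) is configured, so there is
      // no real user identity to check; keep the pre-Kerberos behaviour.
      return true;
    }
    return HttpServer2.hasAdministratorAccess(ctx, request, response);
  }
}
{code}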



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647883#comment-15647883
 ] 

Hudson commented on HADOOP-13707:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10789 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10789/])
HADOOP-13707. If kerberos is enabled while HTTP SPNEGO is not (brahma: rev 
dbb133ccfc00e20622a5dbf7a6e1126fb63d7487)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/AdminAuthorizedServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and enabling HTTP SPNEGO are two 
> separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some links 
> cannot be accessed, such as "/logs", and the server returns an error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647872#comment-15647872
 ] 

Hadoop QA commented on HADOOP-13119:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 54 unchanged - 4 fixed = 55 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-common-project/hadoop-common generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Unread field:HttpServer2.java:[line 273] |
|  |  Unread field:HttpServer2.java:[line 159] |
|  |  Unread field:HttpServer2.java:[line 268] |
| Failed junit tests | hadoop.log.TestLogLevel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837984/HADOOP-13119.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d6c126bb11ab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 026b39a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/testReport/ |
| modules | C: 

[jira] [Commented] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-11-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647866#comment-15647866
 ] 

Brahma Reddy Battula commented on HADOOP-13707:
---

Pushed to trunk..


[~ste...@apache.org] can we delete the master branch..? Or shall we discuss it 
on the mailing list..? Thanks

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and enabling HTTP SPNEGO are two 
> separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some links 
> cannot be accessed, such as "/logs", and the server returns an error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.11+

2016-11-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647863#comment-15647863
 ] 

Steve Loughran commented on HADOOP-13050:
-

Yetus failure is due to branch-2 not compiling; I assume it's unrelated. I'll 
resubmit once that's fixed.

I'm considering whether we could wrap the AWS SDK with something that shades 
Jackson, that is: create an intermediate module whose purpose is to isolate 
the AWS SDK's Jackson and so make using this release in 2.7.x straightforward, 
with branch-2 choosing whatever version of Jackson people want. It'd be back to 
being some uber-AWS JAR. The risk here is if someone wants to use an AWS 
library which we don't bundle (say: Kinesis, SNS, ...) and has to link it up 
separately, or worse: ends up with duplicate dependencies.
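
For illustration, the kind of relocation such a wrapper module might use (a
sketch only; the shaded package name is an assumption, not a worked-out
proposal):

{code}
<!-- Sketch: relocate the AWS SDK's Jackson into a private package. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.fasterxml.jackson</pattern>
        <shadedPattern>org.apache.hadoop.shaded.com.fasterxml.jackson</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
{code}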

I am so looking forward to Java 9, even if I have no idea how we'd get Hadoop 
to work there.

> Upgrade to AWS SDK 10.11+
> -
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2-003.patch, 
> HADOOP-13050-branch-2.002.patch, HADOOP-13050-branch-2.003.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6 (shipping in Hadoop 2.7+) doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda-Time that AWS uses.
> Fix: update the SDK. Though, that implies updating the HTTP components: 
> HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2016-11-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13707:
--
Fix Version/s: 3.0.0-alpha2

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> In {{HttpServer2#hasAdministratorAccess}}, it uses 
> `hadoop.security.authorization` to detect whether HTTP is authenticated.
> It's not correct, because enabling Kerberos and enabling HTTP SPNEGO are two 
> separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some links 
> cannot be accessed, such as "/logs", and the server returns an error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}.
> {{getAuthType}} returns the authentication scheme of this request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2016-11-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647834#comment-15647834
 ] 

Steve Loughran commented on HADOOP-13345:
-

Thanks: sorry for breaking things; it's the price of working against a changing 
part of the codebase. Now that I'm involved in this branch as well, (a) I'll 
have less time to break things on branch-2 and (b) I'll be more aware of what 
I've just broken.

FWIW I don't think you need to rebase here; merging is better for 
collaboration, and avoids that hell of having to fix up some patch conflict 
over a class you know gets deleted later. When this gets pulled into 
trunk/branch-2, it'll be done as a squashed merge, so there's no harm in doing 
merges here rather than rebasing.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647718#comment-15647718
 ] 

Steve Loughran commented on HADOOP-13651:
-

LGTM

bq. All integration tests pass (except for a couple of unrelated, 
sometimes-flaky tests).

Which tests? I'd encourage you to still declare them in the "tests worked" 
report, just so we can track their reliability.

h3. S3ABlockOutputStream

{code}
  /** Total bytes for downloads submitted so far. */
  private int bytesSubmitted;
{code}

Best to change the comment, and make it a long, as we support (tested) uploads 
> 4GB. I think {{putObject()}} can still return an int, but it's safest to make 
that a long too.
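
For clarity, the widening being suggested might look like this (a sketch of the
review comment, not the attached patch):

{code}
/** Total bytes for uploads submitted so far. */
private long bytesSubmitted;   // long, so uploads past 2GB never overflow
{code}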

h3. S3AFileSystem

* Good to see you counting errors ignored on MD updates. I was going to suggest 
that, but looked closer and you'd already done it. Nice. You may want to make 
it a separate metric, "metadata update failure", so those failures can be 
monitored —they're more serious than a recoverable read error on a stream, 
which is what that counter was put in to track.
* As/when I move to parallel renaming, this code is going to break again. I 
don't see an easy workaround, given that patch doesn't exist yet. FWIW it will 
involve taking the recursive directory listing, sorting by size and then 
submitting to the same thread pool which supports writes ("slow IO 
operations").

h3. DirListingMetadata

* {{prettyPrint}}: fix the "Authoritive" spelling; break up the append with 
"+" concatenation inside it into an append chain (see the sketch below).
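
To illustrate the append-chain point (a generic example; the field names are
made up, not the actual DirListingMetadata members):

{code}
static String prettyPrintExample(boolean isAuthoritative, String path) {
  StringBuilder sb = new StringBuilder();
  // Rather than: sb.append("isAuthoritative=" + isAuthoritative + ", path=" + path);
  // chain the appends so no intermediate String is built:
  sb.append("isAuthoritative=").append(isAuthoritative)
    .append(", path=").append(path);
  return sb.toString();
}
{code}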

h3. S3Guard

Regarding that race condition between listStatus and delete: we always warn 
that it exists in a DFS. There's no guarantee a file returned in a listing 
is still there by the time you come to use it, and if you use one of the remote 
iterator methods, the risk of that, or of more dramatic things like parent-dir 
deletion, increases. There's not even any guarantee that an iterated listing is 
consistent; even in HDFS you could rename a file "z" to "a" and have 
it never get found in an ongoing list operation.
So: no need to have snapshot consistency between list and use, just go for "as 
good as the spec says you need to". 

h3. MetadataStoreTestBase

L285: please use a different filename.



> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch, 
> HADOOP-13651-HADOOP-13345.004.patch, HADOOP-13651-HADOOP-13345.005.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-08 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Status: Patch Available  (was: Reopened)

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, screenshot-1.png
>
>
> Use Hadoop in secure mode.
> Log in as a KDC user, kinit.
> Start Firefox and enable Kerberos.
> Access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to follow the web interface's logs link.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show links that only the hdfs user is able to access.
> 2. Or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2016-11-08 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13119:

Attachment: HADOOP-13119.001.patch

Uploading the first patch for this issue. Any comments are welcome.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Attachments: HADOOP-13119.001.patch, screenshot-1.png
>
>
> Use Hadoop in secure mode.
> Log in as a KDC user, kinit.
> Start Firefox and enable Kerberos.
> Access http://localhost:50070/logs/
> Get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to follow the web interface's logs link.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show links that only the hdfs user is able to access.
> 2. Or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647556#comment-15647556
 ] 

Sean Busbey edited comment on HADOOP-13789 at 11/8/16 1:31 PM:
---

{quote}
-1  findbugs  0m 30s  
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
{quote}

Not sure what's up with this warning. I don't see anything in the output to 
indicate findbugs failed. (and nothing in the pom change there should have 
caused a difference in the findings)


was (Author: busbey):
{quote}
-1  findbugs  0m 30s  
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
{quote}

Not sure what's up with this warning. I don't see anything in the output to 
indicate findbugs failed.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds them as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13789) Hadoop Common includes generated test protos in both jar and test-jar

2016-11-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647556#comment-15647556
 ] 

Sean Busbey commented on HADOOP-13789:
--

{quote}
-1  findbugs  0m 30s  
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
{quote}

Not sure what's up with this warning. I don't see anything in the output to 
indicate findbugs failed.

> Hadoop Common includes generated test protos in both jar and test-jar
> -
>
> Key: HADOOP-13789
> URL: https://issues.apache.org/jira/browse/HADOOP-13789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13789.1.patch, HADOOP-13789.2.patch, 
> HADOOP-13789.3.patch
>
>
> Right now our ProtocMojo always adds source directories to the main compile 
> phase and we use it in hadoop-common to both generate main files as well as 
> test files. This results in the test files getting added to both our test jar 
> (correct) and our main jar (not correct).
> We should either add a main-vs-test flag to the configuration for ProtocMojo 
> or make a ProtocTestMojo that always adds them as test sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.11+

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647326#comment-15647326
 ] 

Hadoop QA commented on HADOOP-13050:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  3m 
10s{color} | {color:red} root in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
35s{color} | {color:red} root in branch-2 failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
44s{color} | {color:red} root in branch-2 failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
30s{color} | {color:red} root in the patch failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 30s{color} 
| {color:red} root in the patch failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
44s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 44s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  

[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647270#comment-15647270
 ] 

Steve Loughran commented on HADOOP-12718:
-

Looking at this patch, I think I'd prefer it if the tests looked for a 
constant string declared in {{FSExceptionMessages}}, rather than "permission 
denied". Tests that look for strings are always so, so brittle, and we should 
be looking at a consistent error message for all those documentation and 
support-call issues.
I think the asserts could also include the full text:

{code}
private void assertPermissionDenied(String text) {
  assertTrue(text + " does not contain " + FSExceptionMessages.NO_PERMISSION,
      text.contains(FSExceptionMessages.NO_PERMISSION));
}
{code}

One thing to consider —and I don't think there's a right or wrong here— is 
whether to make the exception a {{java.nio.file.AccessDeniedException}} or an 
{{org.apache.hadoop.fs.PathPermissionException}}. Both of these separate out 
the path for analysis later.
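
For concreteness, the two options side by side (a sketch; the constructor forms
are my assumption of the simplest overloads, not a recommendation from this
issue):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

final class PermissionErrors {
  /** Sketch only: both exception types keep the offending path for callers. */
  static IOException permissionDenied(Path src, boolean useNio) {
    return useNio
        ? new java.nio.file.AccessDeniedException(src.toString())
        : new org.apache.hadoop.fs.PathPermissionException(src.toString());
  }
}
{code}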

I've just checked S3AFileSystem; there we throw the 
{{AccessDeniedException}}; I don't know if that's good —or whether it should 
switch to {{org.apache.hadoop.fs.PathPermissionException}}.

That's really a separate issue; I'd go through the blobstores and make them 
consistent if we ever chose one exception & message.


> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647252#comment-15647252
 ] 

Akira Ajisaka commented on HADOOP-13802:


Agreed.

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf  specify an application configuration file
> -D 

[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.11+

2016-11-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13050:

Status: Patch Available  (was: Open)

> Upgrade to AWS SDK 10.11+
> -
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2-003.patch, 
> HADOOP-13050-branch-2.002.patch, HADOOP-13050-branch-2.003.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6 (shipping in Hadoop 2.7+) doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda-Time that AWS uses.
> Fix: update the SDK. Though, that implies updating the HTTP components: 
> HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647102#comment-15647102
 ] 

Hadoop QA commented on HADOOP-13651:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
39s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} root: The patch generated 2 new + 12 unchanged - 
0 fixed = 14 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-tools/hadoop-aws generated 3 new + 0 unchanged 
- 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Dead store to date in 
org.apache.hadoop.fs.s3a.S3AUtils.createUploadFileStatus(Path, boolean, long, 
long, String)  At 
S3AUtils.java:org.apache.hadoop.fs.s3a.S3AUtils.createUploadFileStatus(Path, 
boolean, long, long, String)  At S3AUtils.java:[line 271] |
|  |  Load of known null value in 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.get(Path)  At 
LocalMetadataStore.java:in 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.get(Path)  At 
LocalMetadataStore.java:[line 130] |
|  |  Load of known null value in 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.listChildren(Path)  At 
LocalMetadataStore.java:in 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.listChildren(Path)  At 
LocalMetadataStore.java:[line 140] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 

[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-08 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15646989#comment-15646989
 ] 

Vishwajeet Dusane commented on HADOOP-13037:


[~fabbri] - The contract-test-related changes and comments from 
[~ste...@apache.org] and [~cnauroth] are captured in HADOOP-13257. I will 
raise a separate patch for contract-test-related optimization and correction, 
as 
[proposed|https://issues.apache.org/jira/browse/HADOOP-13037?focusedCommentId=15627882=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15627882]
 by [~chris.douglas] under the HADOOP-13257 task, once HADOOP-13037 is committed.

HADOOP-13257 would cover usage optimizations like 
{{org.junit.Assume.assumeTrue(AdlStorageConfiguration.isContractTestEnabled());}}.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch, HADOOP-13037-004.patch, 
> HADOOP-13037.005.patch, HADOOP-13037.006.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-11-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15646969#comment-15646969
 ] 

Aaron Fabbri commented on HADOOP-13037:
---

Thanks for the ping [~chris.douglas].  Glad to see the followup on separating 
this from WebHDFS.

Did you enable the FS contract tests and run those?  Any issues?  

I notice this code has a separate way to disable the tests here:
{quote}
 org.junit.Assume
.assumeTrue(AdlStorageConfiguration.isContractTestEnabled());
{quote}



> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch, HADOOP-13037-004.patch, 
> HADOOP-13037.005.patch, HADOOP-13037.006.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-08 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13651:
--
Status: Patch Available  (was: Open)

> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch, 
> HADOOP-13651-HADOOP-13345.004.patch, HADOOP-13651-HADOOP-13345.005.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-08 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13651:
--
Status: Open  (was: Patch Available)

> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch, 
> HADOOP-13651-HADOOP-13345.004.patch, HADOOP-13651-HADOOP-13345.005.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-08 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13651:
--
Attachment: HADOOP-13651-HADOOP-13345.005.patch

Attaching v5 patch.  Changes from v4:

- Rebase on latest trunk.
  - [~steve_l], please review new changes to S3ABlockOutputStream.  Let me know 
if you want to account for file size some other way.
  - S3AFileStatus constructor changed.

- Minor checkstyle / javadoc cleanup.

All unit tests passed for me.  All integration tests pass (except for a couple 
of unrelated, sometimes-flaky tests).

> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch, 
> HADOOP-13651-HADOOP-13345.004.patch, HADOOP-13651-HADOOP-13345.005.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org