[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-10-03 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190844#comment-16190844
 ] 

Akira Ajisaka commented on HADOOP-13835:


LGTM, +1. Thanks [~vvasudev].

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-10-03 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190836#comment-16190836
 ] 

Varun Vasudev commented on HADOOP-13835:


[~ajisakaa] - any chance you can take a look at this today? I'd like to get 
this into the 2.9.0 release. Thanks!

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Comment Edited] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190815#comment-16190815
 ] 

Ajay Kumar edited comment on HADOOP-14771 at 10/4/17 4:56 AM:
--

[~busbey],[~haibochen],[~andrew.wang],[~ste...@apache.org] thanks for your 
comments and review. I wanted to check whether there are any suggestions on 
the latest patch. I have tested it locally. 


was (Author: ajayydv):
[~busbey],[~haibochen],[~andrew.wang],[~ste...@apache.org] any suggestions on 
the latest patch? I have tested it locally. 

> hadoop-client does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Haibo Chen
>Assignee: Ajay Kumar
>Priority: Critical
> Attachments: HADOOP-14771.01.patch, HADOOP-14771.02.patch, 
> HADOOP-14771.03.patch, HADOOP-14771.04.patch
>
>
> The hadoop-client module does not include hadoop-yarn-client, and thus the 
> shared hadoop-client is incomplete. 






[jira] [Commented] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190815#comment-16190815
 ] 

Ajay Kumar commented on HADOOP-14771:
-

[~busbey],[~haibochen],[~andrew.wang],[~ste...@apache.org] any suggestions on 
the latest patch? I have tested it locally. 

> hadoop-client does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Haibo Chen
>Assignee: Ajay Kumar
>Priority: Critical
> Attachments: HADOOP-14771.01.patch, HADOOP-14771.02.patch, 
> HADOOP-14771.03.patch, HADOOP-14771.04.patch
>
>
> The hadoop-client module does not include hadoop-yarn-client, and thus the 
> shared hadoop-client is incomplete. 






[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190717#comment-16190717
 ] 

Hadoop QA commented on HADOOP-9747:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 9 new + 117 unchanged - 25 fixed = 126 total (was 142) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-9747 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861100/HADOOP-9747.2.trunk.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0e7354329303 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b34b3ff |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13450/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13450/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13450/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce unnecessary UGI synchronization

[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash and linkFallback for ViewFileSystem

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190716#comment-16190716
 ] 

Hadoop QA commented on HADOOP-13055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} root: The patch generated 0 new + 190 unchanged - 15 
fixed = 190 total (was 205) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 42s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-13055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890262/HADOOP-13055.09.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ec679c0d0815 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 79e37dc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |

[jira] [Comment Edited] (HADOOP-14910) Upgrade netty-all jar to latest 4.0.x.Final

2017-10-03 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190590#comment-16190590
 ] 

Bharat Viswanadham edited comment on HADOOP-14910 at 10/4/17 12:45 AM:
---

Hi [~vinayrpet],
There is another place that uses decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8):





{code:java}
In ParameterParser.java Line 147-148:
String cf = decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8);
{code}


Can we use the QueryStringDecoder here too and remove the decodeComponent 
method, which was added only to resolve this issue?


was (Author: bharatviswa):
Hi [~vinayrpet],
There is another place that uses decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8):


{noformat}
In ParameterParser.java Line 147-148:
{noformat}

{code:java}
String cf = decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8);
{code}


Can we use the QueryStringDecoder here too and remove the decodeComponent 
method, which was added only to resolve this issue?
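
For reference, a minimal sketch of what the QueryStringDecoder-based 
replacement could look like; the request URI and parameter name below are 
illustrative, not the actual WebHDFS request handling:

{code:java}
import io.netty.handler.codec.http.QueryStringDecoder;

import java.util.List;
import java.util.Map;

public class QueryDecodeSketch {
  public static void main(String[] args) {
    // QueryStringDecoder percent-decodes parameters as UTF-8 by default,
    // so no separate decodeComponent() helper is needed.
    QueryStringDecoder decoder =
        new QueryStringDecoder("/webhdfs/v1/tmp/f?op=CREATE&createflag=append");
    Map<String, List<String>> params = decoder.parameters();
    System.out.println(params.get("createflag")); // [append]
  }
}
{code}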

> Upgrade netty-all jar to latest 4.0.x.Final
> ---
>
> Key: HADOOP-14910
> URL: https://issues.apache.org/jira/browse/HADOOP-14910
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HADOOP-14910-01.patch, HADOOP-14910-02.patch
>
>
> Upgrade the netty-all jar to version 4.0.37.Final to fix the latest reported 
> vulnerabilities.






[jira] [Comment Edited] (HADOOP-14910) Upgrade netty-all jar to latest 4.0.x.Final

2017-10-03 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190590#comment-16190590
 ] 

Bharat Viswanadham edited comment on HADOOP-14910 at 10/4/17 12:44 AM:
---

Hi [~vinayrpet],
There is another place that uses decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8):


{noformat}
In ParameterParser.java Line 147-148:
{noformat}

{code:java}
String cf = decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8);
{code}


Can we use the QueryStringDecoder here too and remove the decodeComponent 
method, which was added only to resolve this issue?


was (Author: bharatviswa):
Hi [~vinayrpet],
There is another place that uses decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8);

Can we use the QueryStringDecoder here too and remove the decodeComponent 
method, which was added only to resolve this issue? 

> Upgrade netty-all jar to latest 4.0.x.Final
> ---
>
> Key: HADOOP-14910
> URL: https://issues.apache.org/jira/browse/HADOOP-14910
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HADOOP-14910-01.patch, HADOOP-14910-02.patch
>
>
> Upgrade the netty-all jar to version 4.0.37.Final to fix the latest reported 
> vulnerabilities.






[jira] [Commented] (HADOOP-14910) Upgrade netty-all jar to latest 4.0.x.Final

2017-10-03 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190590#comment-16190590
 ] 

Bharat Viswanadham commented on HADOOP-14910:
-

Hi [~vinayrpet],
There is another place that uses decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8);

Can we use the QueryStringDecoder here too and remove the decodeComponent 
method, which was added only to resolve this issue? 

> Upgrade netty-all jar to latest 4.0.x.Final
> ---
>
> Key: HADOOP-14910
> URL: https://issues.apache.org/jira/browse/HADOOP-14910
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HADOOP-14910-01.patch, HADOOP-14910-02.patch
>
>
> Upgrade the netty-all jar to version 4.0.37.Final to fix the latest reported 
> vulnerabilities.






[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190579#comment-16190579
 ] 

Hadoop QA commented on HADOOP-14920:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 70 unchanged - 7 fixed = 72 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890259/HADOOP-14920.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 57f84019e114 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 79e37dc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13448/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13448/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13448/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13448/console |

[jira] [Commented] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits

2017-10-03 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190577#comment-16190577
 ] 

Aaron Fabbri commented on HADOOP-13974:
---

I'll try to throw something together this week.

> S3a CLI to support list/purge of pending multipart commits
> --
>
> Key: HADOOP-13974
> URL: https://issues.apache.org/jira/browse/HADOOP-13974
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Aaron Fabbri
>
> The S3A CLI will need to be able to list and delete pending multipart 
> commits. 
> We can do the cleanup already via fs.s3a properties. The CLI will let scripts 
> stat for outstanding data (have a different exit code) and permit batch jobs 
> to explicitly trigger cleanups.
> This will become critical with the multipart committer, as there's a 
> significantly higher likelihood of commits remaining outstanding.
> We may also want to be able to enumerate/cancel all pending commits in the FS 
> tree.






[jira] [Assigned] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits

2017-10-03 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-13974:
-

Assignee: Aaron Fabbri

> S3a CLI to support list/purge of pending multipart commits
> --
>
> Key: HADOOP-13974
> URL: https://issues.apache.org/jira/browse/HADOOP-13974
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Aaron Fabbri
>
> The S3A CLI will need to be able to list and delete pending multipart 
> commits. 
> We can do the cleanup already via fs.s3a properties. The CLI will let scripts 
> stat for outstanding data (have a different exit code) and permit batch jobs 
> to explicitly trigger cleanups.
> This will become critical with the multipart committer, as there's a 
> significantly higher likelihood of commits remaining outstanding.
> We may also want to be able to enumerate/cancel all pending commits in the FS 
> tree.






[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2017-10-03 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13055:

Attachment: HADOOP-13055.09.patch

Thanks for the review comments [~eddyxu]. Attached v09 patch to address the 
following comments.

bq. Should we consider moving the "fallback link" from {{INodeDir}} to 
{{InodeTree}}?
Done. But isRoot() is still needed in other places.

bq. Could you rephrase the doc about the concept of internalDir.
Done. 

bq. Is INodeDir#resolve() called? Can we remove it?
Done.

bq. Please add comments for {{static class LinkEntry }}
Done. 

bq. For INodeDir#getChildren you might want to return an unmodifiable map.
Done.

bq. It'd be nice to raise a user-readable error if multiple MERGE_SLASH or 
SINGLE_FALLBACK were configured.
Maybe check mergeSlashTarget == null ? if (linkType != LinkType.MERGE_SLASH) {
Done.

Can you please take a look at the latest patch?
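
As an aside, a minimal sketch of the "return an unmodifiable map" suggestion 
above, assuming a children map field; the class and field names here are 
hypothetical, not the actual InodeTree code:

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class INodeDirSketch<T> {
  private final Map<String, T> children = new HashMap<>();

  // Expose a read-only view: put()/remove() on the returned map throw
  // UnsupportedOperationException, so callers cannot mutate the mount tree.
  Map<String, T> getChildren() {
    return Collections.unmodifiableMap(children);
  }
}
{code}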


> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Affects Versions: 2.7.5
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, 
> HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch, 
> HADOOP-13055.08.patch, HADOOP-13055.09.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}






[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash and linkFallback for ViewFileSystem

2017-10-03 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13055:

Summary: Implement linkMergeSlash and linkFallback for ViewFileSystem  
(was: Implement linkMergeSlash for ViewFileSystem)

> Implement linkMergeSlash and linkFallback for ViewFileSystem
> 
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Affects Versions: 2.7.5
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, 
> HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch, 
> HADOOP-13055.08.patch, HADOOP-13055.09.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}






[jira] [Updated] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.

2017-10-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-14920:

Attachment: HADOOP-14920.002.patch

Updated the patch to add a unit test for the change. The checkstyle warning 
from the previous Jenkins run was not caused by this change. The existing 
checkstyle issue with switch/case indent in 
DelegationTokenAuthenticationHandler.java can be fixed in a separate JIRA to 
reduce unrelated changes.

> KMSClientProvider won't work with KMS delegation token retrieved from 
> non-Java client.
> --
>
> Key: HADOOP-14920
> URL: https://issues.apache.org/jira/browse/HADOOP-14920
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-14920.001.patch, HADOOP-14920.002.patch
>
>
> HADOOP-13381 added support for using a KMS delegation token to connect to the 
> KMS server for key operations. However, the logic that checks whether the UGI 
> contains a KMS delegation token assumes that the token must contain a service 
> attribute. Otherwise, a KMS delegation token won't be recognized.
> For a delegation token obtained via a non-Java client such as curl (http), 
> the default DelegationTokenAuthenticationHandler only supports the *renewer* 
> parameter and assumes the client itself will add the service attribute. This 
> means a Java client with KMSClientProvider can't use a KMS delegation token 
> retrieved from a non-Java client, because the token does not contain a 
> service attribute. 
> I did some investigation on this and found two solutions:
> 1. A similar use case exists for webhdfs, which supports it with a 
> ["service" 
> parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token].
> We can do this similarly by allowing the client to specify a service 
> attribute in the request URL, to be included in the returned token like 
> webhdfs does. Even though this changes DelegationTokenAuthenticationHandler 
> and may affect many other web components, it seems to be a clean and low-risk 
> solution because it will be an optional parameter. Also, other components get 
> non-Java client interop support for free if they have a similar use case. 
> 2. The other way to solve this is to relax the token check in 
> KMSClientProvider to check only the token kind instead of the service. This 
> is an easy workaround but seems less optimal to me. 
> cc: [~xiaochen] for additional input.






[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2017-10-03 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190506#comment-16190506
 ] 

Aaron Fabbri commented on HADOOP-14927:
---

[~ste...@apache.org] looking at the code for {{Destroy#run(..)}} in 
S3GuardTool.java, it seems like the FNFE is caught and suppressed, but the test 
is expecting an exception to be thrown.   Should we just change the test to 
*not* expect an exception? 
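
A minimal sketch of what that change could look like, assuming the tool 
returns a success code when the bucket is absent; the variable and bucket 
names are illustrative, not the actual test code:

{code:java}
// If Destroy#run(..) suppresses the FNFE for a missing bucket, assert a
// clean exit code instead of expecting an exception.
int ret = tool.run(new String[] {"destroy", "s3a://this-bucket-does-not-exist"});
assertEquals("destroy on a missing bucket should be a no-op", 0, ret);
{code}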

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: s3
>Affects Versions: 3.0.0-alpha3
>Reporter: Aaron Fabbri
>Priority: Minor
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}






[jira] [Created] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2017-10-03 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14927:
-

 Summary: ITestS3GuardTool failures in testDestroyNoBucket()
 Key: HADOOP-14927
 URL: https://issues.apache.org/jira/browse/HADOOP-14927
 Project: Hadoop Common
  Issue Type: Bug
  Components: s3
Affects Versions: 3.0.0-alpha3
Reporter: Aaron Fabbri
Priority: Minor


Hit this when testing for the Hadoop 3.0.0-beta1 RC0.

{noformat}
hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
-Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
...
Failed tests: 
  ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
Expected an exception, got 0
  ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
Expected an exception, got 0
{noformat}







[jira] [Commented] (HADOOP-14926) Reconsider the default value of RPC timeout and document it

2017-10-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190456#comment-16190456
 ] 

Jason Lowe commented on HADOOP-14926:
-

I do not see how zero would be a valid timeout.  The RPC call would immediately 
time out before it has even reached the remote server, preventing any RPC call 
from succeeding.

Many systems implement zero timeout as an indefinite wait when zero makes no 
sense.  Besides Java's socket libs, the {{wait}} builtin does this as well, 
since it doesn't make sense to wait if there's no time that will elapse -- 
simply don't call {{wait}} if you do not want to wait.  Similarly, it makes no 
sense to try to complete an RPC call in zero time.

I don't see how we can change the interpretation of zero without breaking 
compatibility, and I also don't see how a literally zero-wait timeout is useful 
to configure in practice since every RPC call will fail.  If we want to add -1 
as yet another way to specify an infinite wait that's fine with me, but 
changing the interpretation of zero will be problematic for dubious benefit.
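
For reference, a small sketch of the zero-means-infinite convention being 
discussed, using java.net.Socket; the -1 handling is a hypothetical 
illustration of the proposal, not existing Hadoop behavior:

{code:java}
import java.net.Socket;
import java.net.SocketException;

public class TimeoutConvention {
  public static void main(String[] args) throws SocketException {
    Socket socket = new Socket();
    // Per the java.net.Socket javadoc, a timeout of zero is interpreted
    // as an infinite timeout, the interpretation discussed above.
    socket.setSoTimeout(0);

    // Hypothetical sketch of the proposal: accept -1 as another way to
    // request an indefinite wait, without changing the meaning of 0.
    int rpcTimeout = -1;
    boolean indefiniteWait = (rpcTimeout <= 0);
    System.out.println("indefinite wait: " + indefiniteWait);
  }
}
{code}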


> Reconsider the default value of RPC timeout and document it
> ---
>
> Key: HADOOP-14926
> URL: https://issues.apache.org/jira/browse/HADOOP-14926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Yufei Gu
>
> We use zero as the default value of the RPC timeout, which means we don't 
> enforce any timeout, aka an infinite timeout. I think that *zero means 
> infinite* is counter-intuitive and error-prone, even though some Java libs 
> (e.g. Socket#setSoTimeout()) do that as well. Zero could be considered a 
> valid timeout value, while negative one isn't. If we use zero to represent 
> infinite, which number could be used to represent a zero timeout? I suggest 
> using -1 as the default value to indicate infinite. 
> We also need to document the default value and that it means an infinite 
> timeout. 






[jira] [Commented] (HADOOP-14914) Change to a safely casting long to int.

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190333#comment-16190333
 ] 

Ajay Kumar commented on HADOOP-14914:
-

[~yufeigu], thanks for reviewing the patch. I am checking test failures.

> Change to a safely casting long to int. 
> 
>
> Key: HADOOP-14914
> URL: https://issues.apache.org/jira/browse/HADOOP-14914
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Ajay Kumar
> Attachments: HADOOP-14914.001.patch
>
>
> There are bunches of casts from long to int like this:
> {code}
> long l = 123;
> int i = (int) l;
> {code}
> This is not a safe cast. If l is greater than Integer.MAX_VALUE, i would be 
> negative, which is unexpected behavior. We probably at least want to throw an 
> exception in that case. I suggest using {{Math.toIntExact(longValue)}} to 
> replace them, which throws an exception if the value overflows an int. 
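
A small, self-contained example of the suggested replacement; Math.toIntExact 
(available since JDK 8) throws ArithmeticException on overflow instead of 
silently wrapping:

{code:java}
public class SafeCastExample {
  public static void main(String[] args) {
    long small = 123L;
    int ok = Math.toIntExact(small);   // fine: 123 fits in an int
    System.out.println(ok);

    long big = Integer.MAX_VALUE + 1L;
    int wrapped = (int) big;           // plain cast silently wraps
    System.out.println(wrapped);       // prints -2147483648

    try {
      Math.toIntExact(big);            // overflow is detected instead
    } catch (ArithmeticException e) {
      System.out.println("overflow detected: " + e.getMessage());
    }
  }
}
{code}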






[jira] [Created] (HADOOP-14926) Reconsider the default value of RPC timeout and document it

2017-10-03 Thread Yufei Gu (JIRA)
Yufei Gu created HADOOP-14926:
-

 Summary: Reconsider the default value of RPC timeout and document 
it
 Key: HADOOP-14926
 URL: https://issues.apache.org/jira/browse/HADOOP-14926
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Yufei Gu


We use zero as the default value of the RPC timeout, which means we don't 
enforce any timeout, aka an infinite timeout. I think that *zero means 
infinite* is counter-intuitive and error-prone, even though some Java libs 
(e.g. Socket#setSoTimeout()) do that as well. Zero could be considered a valid 
timeout value, while negative one isn't. If we use zero to represent infinite, 
which number could be used to represent a zero timeout? I suggest using -1 as 
the default value to indicate infinite. 
We also need to document the default value and that it means an infinite 
timeout. 






[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2017-10-03 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190318#comment-16190318
 ] 

Yufei Gu commented on HADOOP-12672:
---

You are right. It's weird that IntelliJ IDEA somehow didn't tell me that, even 
after I restarted it. Anyway, sorry for the confusion. Thanks for pointing it 
out. 

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch, HADOOP-12672.005.patch, 
> HADOOP-12672.006.patch
>
>
> Currently if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides the ipc.ping.interval and the client will throw an 
> exception instead of sending a ping when the interval has passed. RPC timeout 
> should work without effectively disabling IPC ping.






[jira] [Commented] (HADOOP-14921) Conflicts when starting daemons with the same name

2017-10-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190227#comment-16190227
 ] 

Allen Wittenauer commented on HADOOP-14921:
---

hdfs dfsrouter 

This is consistent with hdfs dfsadmin, etc.

> Conflicts when starting daemons with the same name
> --
>
> Key: HADOOP-14921
> URL: https://issues.apache.org/jira/browse/HADOOP-14921
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>
> HDFS-10467 is adding a {{Router}} while YARN already added one for YARN-2915. 
> Both of them are ultimately started from {{hadoop-functions.sh}}, and both of 
> them end up using {{/tmp/hadoop-hadoop-router.pid}} as the PID file. I 
> propose to also use the command name for the PID file.






[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.

2017-10-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190180#comment-16190180
 ] 

Xiaoyu Yao commented on HADOOP-14920:
-

Thanks [~xiaochen] for the quick responses.

bq. httpfs highly shares webhdfs code inside NN, and highly shares 
authentication code with KMS. Would adding a service param here mean httpfs 
would be double-service-configured?
webhdfs had its own delegation token handling before the refactoring work, so 
the change in DelegationTokenAuthenticationHandler/DelegationTokenAuthenticator 
is independent of webhdfs.

For services that use 
DelegationTokenAuthenticationHandler/DelegationTokenAuthenticator, like KMS:
* A Java client always sets the service attribute for the returned token, so 
there is no need to add a service parameter to the token request URL.
* A non-Java client usually can't set the service attribute for the returned 
token easily. It can do so by adding a service parameter to the token request 
URL, as webhdfs supports with this patch.

In summary, we can only have one service attribute in the token, so double 
service configuration won't be an issue here.

bq. if the service could be set arbitrarily, it would be good to find a way to 
make it easier to debug...
In the attached patch, I've added server-side tracing of all the parameters in 
DelegationTokenManager#createToken(). On the client side, 
KMSClientProvider#getActualUgi already dumps the UGI along with all the token 
info such as kind, service, etc.

bq. Delegation token related ops are not even in the kms docs, would be great 
to add those too. Can be a separate jira.
I've found the same issue and filed a ticket, HADOOP-12521, for myself. I will 
spend some time to finish the doc after HADOOP-14920.
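
For reference, a minimal sketch of the gap being described, using the public 
Token API; the encoded string and the service value below are illustrative 
only:

{code:java}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

public class TokenServiceSketch {
  public static void main(String[] args) throws Exception {
    // A delegation token fetched over HTTP (e.g. with curl) arrives as an
    // encoded string; its service field is typically empty.
    Token<?> token = new Token<>();
    token.decodeFromUrlString(args[0]);

    // Java clients normally fill in the service themselves; a non-Java
    // caller cannot do this easily, which is the gap the patch addresses.
    token.setService(new Text("kms://https@kms-host:9600/kms"));
    System.out.println(token);
  }
}
{code}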


> KMSClientProvider won't work with KMS delegation token retrieved from 
> non-Java client.
> --
>
> Key: HADOOP-14920
> URL: https://issues.apache.org/jira/browse/HADOOP-14920
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-14920.001.patch
>
>
> HADOOP-13381 added support for using a KMS delegation token to connect to the 
> KMS server for key operations. However, the logic that checks whether the UGI 
> contains a KMS delegation token assumes that the token must contain a service 
> attribute. Otherwise, a KMS delegation token won't be recognized.
> For a delegation token obtained via a non-Java client such as curl (http), 
> the default DelegationTokenAuthenticationHandler only supports the *renewer* 
> parameter and assumes the client itself will add the service attribute. This 
> means a Java client with KMSClientProvider can't use a KMS delegation token 
> retrieved from a non-Java client, because the token does not contain a 
> service attribute. 
> I did some investigation on this and found two solutions:
> 1. A similar use case exists for webhdfs, which supports it with a 
> ["service" 
> parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token].
> We can do this similarly by allowing the client to specify a service 
> attribute in the request URL, to be included in the returned token like 
> webhdfs does. Even though this changes DelegationTokenAuthenticationHandler 
> and may affect many other web components, it seems to be a clean and low-risk 
> solution because it will be an optional parameter. Also, other components get 
> non-Java client interop support for free if they have a similar use case. 
> 2. The other way to solve this is to relax the token check in 
> KMSClientProvider to check only the token kind instead of the service. This 
> is an easy workaround but seems less optimal to me. 
> cc: [~xiaochen] for additional input.






[jira] [Resolved] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14220.
-
   Resolution: Fixed
Fix Version/s: 2.9.0

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-branch-2-017.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?






[jira] [Created] (HADOOP-14925) hadoop-aliyun has missing dependencies

2017-10-03 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14925:
---

 Summary: hadoop-aliyun has missing dependencies
 Key: HADOOP-14925
 URL: https://issues.apache.org/jira/browse/HADOOP-14925
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


Saw these errors uncovered by dist-tools-hooks-maker during build:
{noformat}
ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
{noformat}







[jira] [Created] (HADOOP-14924) hadoop-azure-datalake has missing dependencies

2017-10-03 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14924:
---

 Summary: hadoop-azure-datalake has missing dependencies
 Key: HADOOP-14924
 URL: https://issues.apache.org/jira/browse/HADOOP-14924
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


Saw these errors uncovered by dist-tools-hooks-maker during build:
{noformat}
ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
{noformat}







[jira] [Created] (HADOOP-14923) hadoop-azure has missing dependencies

2017-10-03 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14923:
---

 Summary: hadoop-azure has missing dependencies
 Key: HADOOP-14923
 URL: https://issues.apache.org/jira/browse/HADOOP-14923
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


Saw these errors uncovered by dist-tools-hooks-maker during build:
{noformat}
ERROR: hadoop-azure has missing dependencies: 
jetty-util-ajax-9.3.19.v20170502.jar
{noformat}







[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input

2017-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190105#comment-16190105
 ] 

Hudson commented on HADOOP-14908:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13013 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13013/])
HADOOP-14908. CrossOriginFilter should trigger regex on more input (aw: rev 
4d5dd75b607d25adf8b41f7408713dfcea8f5330)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/http/TestCrossOriginFilter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/CrossOriginFilter.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md


> CrossOriginFilter should trigger regex on more input
> 
>
> Key: HADOOP-14908
> URL: https://issues.apache.org/jira/browse/HADOOP-14908
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Johannes Alberti
> Fix For: 3.1.0
>
> Attachments: HADOOP-14908-PR279.patch
>
>
> Currently,  CrossOriginFilter.java limits regex matching only if there is an 
> asterisk (\*) in the config.
> {code}
> if (allowedOrigin.contains("*")) {
> {code}
> This means that entries such as:
> {code}
> http?://foo.example.com
> https://[a-z][0-9].example.com
> {code}
> ... and other patterns that succinctly limit the input space need to either 
> be fully expanded or dramatically have their space increased by using an 
> asterisk in order to pass through the filter.
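
A minimal sketch of the requested behavior, treating every configured allowed 
origin as a regex rather than only those containing an asterisk; this is an 
illustration, not the actual CrossOriginFilter code:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class OriginMatchSketch {
  public static void main(String[] args) {
    // Compile every configured origin as a pattern, asterisk or not.
    List<Pattern> allowed = Arrays.asList(
        Pattern.compile("https://[a-z][0-9].example.com"));

    String origin = "https://a1.example.com";
    boolean ok = allowed.stream().anyMatch(p -> p.matcher(origin).matches());
    System.out.println(origin + " allowed: " + ok); // true
  }
}
{code}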






[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2017-10-03 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13091:
--
Target Version/s: 2.10.0  (was: 2.9.0)

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable, then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRC retrievals 
> fails, an exception should be thrown.
> Clearly some {{FileSystems}} do not support CRCs, and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check, to prevent users 
> from developing a false sense of security in their copy pipeline.
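A hedged sketch of the strict behaviour proposed above (class and method names are hypothetical, not the committed patch): let the {{IOException}} propagate and treat a missing checksum as a failure rather than a pass.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class StrictChecksums {
  /** Throws when either checksum cannot be retrieved or is null. */
  static boolean checksumsStrictlyEqual(FileSystem sourceFS, Path source,
      FileSystem targetFS, Path target) throws IOException {
    FileChecksum sourceChecksum = sourceFS.getFileChecksum(source); // may throw
    FileChecksum targetChecksum = targetFS.getFileChecksum(target); // may throw
    if (sourceChecksum == null || targetChecksum == null) {
      throw new IOException(
          "Checksum unavailable for " + source + " or " + target);
    }
    return sourceChecksum.equals(targetChecksum);
  }
}
{code}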



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-10-03 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-14176:
--
Target Version/s: 2.10.0  (was: 2.9.0)

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get errors such as the following:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and their values are larger than the ones set in 
> distcp-default.xml, the error may occur.
> We should remove those two configurations from distcp-default.xml.
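Until then, a possible per-run workaround (values illustrative, untested here) is to override the offending properties via generic options; properties set on the command line live in the Configuration overlay, which takes precedence over resource files such as distcp-default.xml:

{code}
hadoop distcp -Dmapreduce.map.memory.mb=3072 \
  -Dmapreduce.map.java.opts=-Xmx2560m \
  hdfs://src-cluster/path hdfs://dst-cluster/path
{code}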



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-10-03 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-10738:
--
Target Version/s: 2.10.0  (was: 2.9.0)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14921) Conflicts when starting daemons with the same name

2017-10-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190086#comment-16190086
 ] 

Íñigo Goiri commented on HADOOP-14921:
--

That would be for the command line?
It might be a little redundant to start as {{hdfs hdfs-router}}.
I would need to change the class name anyway.
I added more details in HDFS-12577.

> Conflicts when starting daemons with the same name
> --
>
> Key: HADOOP-14921
> URL: https://issues.apache.org/jira/browse/HADOOP-14921
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>
> HDFS-10467 is adding a {{Router}} while YARN already added one for YARN-2915. 
> Both of them are ultimately started from {{hadoop-functions.sh}}. For the PID 
> file both of them end up using {{/tmp/hadoop-hadoop-router.pid}}. I propose 
> to use the command name also for the PID file.
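A rough sketch of the proposal in hadoop-functions.sh terms (sketch only; the variables follow trunk's shell framework): key the pid file on the invoking command as well as the daemon name, so the two routers no longer collide.

{code}
# today both daemons resolve to ${HADOOP_PID_DIR}/hadoop-${USER}-router.pid;
# including the invoking command (hdfs/yarn) keeps them apart:
HADOOP_PIDFILE="${HADOOP_PID_DIR}/hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SHELL_EXECNAME}-${HADOOP_SUBCMD}.pid"
# e.g. hadoop-hadoop-hdfs-router.pid vs. hadoop-hadoop-yarn-router.pid
{code}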



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14908) CrossOriginFilter should trigger regex on more input

2017-10-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14908:
--
   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

> CrossOriginFilter should trigger regex on more input
> 
>
> Key: HADOOP-14908
> URL: https://issues.apache.org/jira/browse/HADOOP-14908
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Johannes Alberti
> Fix For: 3.1.0
>
> Attachments: HADOOP-14908-PR279.patch
>
>
> Currently, CrossOriginFilter.java only triggers regex matching if there is an 
> asterisk (\*) in the config.
> {code}
> if (allowedOrigin.contains("*")) {
> {code}
> This means that entries such as:
> {code}
> http?://foo.example.com
> https://[a-z][0-9].example.com
> {code}
> ... and other patterns that succinctly limit the input space need either to 
> be fully expanded or to have their match space dramatically widened with an 
> asterisk in order to pass through the filter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.

2017-10-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190055#comment-16190055
 ] 

Xiao Chen commented on HADOOP-14920:


Thanks [~xyao] for looking and explaining. Looking at the webhdfs example, I 
don't see a reason we should not do it. I also agree the service check would 
still be good to have.

Some questions/comments (some I'm not clear on; I'll try to figure them out as 
time permits):
- httpfs heavily shares webhdfs code inside the NN, and heavily shares 
authentication code with KMS. Would adding a service param here mean httpfs 
would be double-service-configured?
- if the service can be set arbitrarily, it would be good to find a way to 
make it easier to debug...
- Delegation token related ops are not even in the [kms 
docs|http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-kms/index.html#KMS_HTTP_REST_API];
 it would be great to add those too. That can be a separate jira.

> KMSClientProvider won't work with KMS delegation token retrieved from 
> non-Java client.
> --
>
> Key: HADOOP-14920
> URL: https://issues.apache.org/jira/browse/HADOOP-14920
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-14920.001.patch
>
>
> HADOOP-13381 added support for using a KMS delegation token to connect to the 
> KMS server for key operations. However, the logic that checks whether the UGI 
> contains a KMS delegation token assumes that the token must contain a service 
> attribute; otherwise, a KMS delegation token won't be recognized.
> For a delegation token obtained via a non-Java client such as curl (HTTP), the 
> default DelegationTokenAuthenticationHandler only supports the *renewer* 
> parameter and assumes the client itself will add the service attribute. As a 
> result, a Java client using KMSClientProvider can't use a KMS delegation token 
> retrieved from a non-Java client, because the token does not contain a service 
> attribute. 
> I did some investigation on this and found two solutions:
> 1. A similar use case exists for webhdfs, and webhdfs supports it with a 
> ["service" 
> parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token].
> We can do the same by allowing the client to specify a service attribute in 
> the request URL and including it in the returned token, like webhdfs. Even 
> though this changes DelegationTokenAuthenticationHandler and may affect many 
> other web components, it seems to be a clean and low-risk solution because it 
> will be an optional parameter. Also, other components get non-Java client 
> interop support for free if they have a similar use case. 
> 2. The other way to solve this is to relax the token check in 
> KMSClientProvider to check only the token kind instead of the service. This 
> is an easy workaround but seems less optimal to me. 
> cc: [~xiaochen] for additional input.
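For context, a client-side workaround sketch in the spirit of option 2 (not the proposed patch; the service URI below is a placeholder): stamp a service onto the token before handing it to {{KMSClientProvider}}.

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

public final class KmsTokenFixup {
  /**
   * A token fetched over HTTP by a non-Java client arrives without a
   * service attribute; set one explicitly so the Java-side token
   * selection can find it.
   */
  static void stampKmsService(Token<?> token) {
    token.setService(new Text("kms://http@kms-host:9600/kms"));
  }
}
{code}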



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input

2017-10-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190051#comment-16190051
 ] 

Allen Wittenauer commented on HADOOP-14908:
---

+1 committing this to trunk

> CrossOriginFilter should trigger regex on more input
> 
>
> Key: HADOOP-14908
> URL: https://issues.apache.org/jira/browse/HADOOP-14908
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Johannes Alberti
> Attachments: HADOOP-14908-PR279.patch
>
>
> Currently, CrossOriginFilter.java only triggers regex matching if there is an 
> asterisk (\*) in the config.
> {code}
> if (allowedOrigin.contains("*")) {
> {code}
> This means that entries such as:
> {code}
> http?://foo.example.com
> https://[a-z][0-9].example.com
> {code}
> ... and other patterns that succinctly limit the input space need either to 
> be fully expanded or to have their match space dramatically widened with an 
> asterisk in order to pass through the filter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189986#comment-16189986
 ] 

Hadoop QA commented on HADOOP-14459:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 15 unchanged - 2 fixed = 17 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890174/HADOOP-14459_6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2aa270049ca3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 453d48b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13447/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13447/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13447/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SerializationFactory shouldn't 

[jira] [Commented] (HADOOP-9902) Shell script rewrite

2017-10-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189907#comment-16189907
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

HADOOP_LOG_DIR hasn't been used exclusively for log4j in a very, very long 
time.  What you've found is an edge condition caused by a bug in HADOOP-9253, 
where the ulimit data wasn't getting captured or even produced to the screen if 
daemons are being run interactively.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> hadoop-9902-1.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
> HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
> HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
> HADOOP-9902.patch, HADOOP-9902.txt, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Nandor Kollar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189882#comment-16189882
 ] 

Nandor Kollar commented on HADOOP-14459:


Ok, can I execute it from Maven?

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459_6.patch, 
> HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.
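A minimal sketch of a defensive reading of the key (not necessarily the committed patch): {{Configuration.getTrimmedStrings}} returns an empty array rather than {{null}} for an absent key, so the factory can iterate safely instead of hitting an NPE.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public final class SerializationsConfig {
  /** Empty array (never null) when io.serializations is undefined. */
  static String[] configuredSerializations(Configuration conf) {
    return conf.getTrimmedStrings(
        CommonConfigurationKeys.IO_SERIALIZATIONS_KEY);
  }
}
{code}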



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189876#comment-16189876
 ] 

Daniel Templeton commented on HADOOP-14459:
---

Yes, there is.  It runs when you press the Submit Patch button.  My bad.  I 
should have noticed that there wasn't a Jenkins report.  I pressed the button 
for you.  Expect a report in a few hours.

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459_6.patch, 
> HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14459:
--
Status: Patch Available  (was: Open)

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459_6.patch, 
> HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Nandor Kollar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189869#comment-16189869
 ] 

Nandor Kollar commented on HADOOP-14459:


[~templedf] done, attached the patch. Is there a style checker that failed 
before committing? Asking so that next time I can run it myself before 
attaching a patch.

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459_6.patch, 
> HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Nandor Kollar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated HADOOP-14459:
---
Attachment: HADOOP-14459_6.patch

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459_6.patch, 
> HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189843#comment-16189843
 ] 

Daniel Templeton commented on HADOOP-14459:
---

Oops, wait.  In my last pass before committing, I noticed that there's now an 
extra space after the colon in the for loop.  Since I'm gonna make you post a 
new patch for that, you should also change {{<code>X</code>}} to {{{@code 
X}}} in the javadocs.

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189783#comment-16189783
 ] 

Steve Loughran commented on HADOOP-14220:
-

Even though this is a javadoc warning, the code is unchanged from trunk. It 
looks like javadoc on Java 8 allows javadocs to talk about exceptions which 
aren't explicitly declared as thrown, *provided a superclass is*.

This is the code in question:
{code}
  /*
   * @throws Exception on any failure
   * @throws ExitUtil.ExitException for an alternative clean exit
   */
  public abstract int run(String[] args, PrintStream out) throws Exception;
{code}

Tuning the javadocs for java7 would only increase the difference between the 
two patches, so I'm going to commit as is.



> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-branch-2-017.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?
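Usage sketch for the new entry points (bucket name illustrative; flags as described in the patch's docs):

{code}
hadoop s3guard bucket-info s3a://example-bucket/
hadoop s3guard set-capacity -read 20 -write 20 s3a://example-bucket/
{code}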



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Status: Open  (was: Patch Available)

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-branch-2-017.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189767#comment-16189767
 ] 

Steve Loughran commented on HADOOP-14220:
-

{code}
[WARNING] 
/testptch/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:326:
 warning - Tag @link: reference not found: ExitUtil.ExitException
{code}

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-branch-2-017.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189723#comment-16189723
 ] 

Daniel Templeton commented on HADOOP-14459:
---

LGTM +1

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Nandor Kollar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189718#comment-16189718
 ] 

Nandor Kollar commented on HADOOP-14459:


[~templedf] uploaded patch with fixed spacing.

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-10-03 Thread Nandor Kollar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated HADOOP-14459:
---
Attachment: HADOOP-14459_5.patch

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459_5.patch, HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189687#comment-16189687
 ] 

Hadoop QA commented on HADOOP-14845:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-tools_hadoop-azure generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14845 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890147/HADOOP-14845.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c644c18a3ca7 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 453d48b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13446/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13446/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13446/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure wasb: getFileStatus not making any auth checks
> 

[jira] [Updated] (HADOOP-14913) Sticky bit implementation for Rename operation in Azure fs

2017-10-03 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari updated HADOOP-14913:
--
Attachment: HADOOP-14193.001.patch

Tested the hadoop-azure module against the Azure South India endpoint in both 
secure and non-secure modes.

> Sticky bit implementation for Rename operation in Azure fs
> --
>
> Key: HADOOP-14913
> URL: https://issues.apache.org/jira/browse/HADOOP-14913
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: azure, fs, secure
> Attachments: HADOOP-14193.001.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete/rename another user's file, because the parent has WRITE permission 
> for all users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'rename' call when authorization is enabled.
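A hedged illustration of the check being added (class and method names are hypothetical; the actual patch wires this into the WASB rename path): with the sticky bit set on the parent, only the entry's owner or the parent's owner may rename it.

{code}
import org.apache.hadoop.fs.FileStatus;

public final class StickyBitCheck {
  /** True when 'user' may rename 'file' sitting under 'parent'. */
  static boolean stickyBitAllowsRename(FileStatus parent, FileStatus file,
      String user) {
    if (!parent.getPermission().getStickyBit()) {
      return true; // no sticky bit: fall back to the normal WRITE check
    }
    return user.equals(file.getOwner()) || user.equals(parent.getOwner());
  }
}
{code}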



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks

2017-10-03 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14845:
--
Attachment: HADOOP-14845.004.patch

> Azure wasb: getFileStatus not making any auth checks
> 
>
> Key: HADOOP-14845
> URL: https://issues.apache.org/jira/browse/HADOOP-14845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0
>
> Attachments: HADOOP-14845.001.patch, HADOOP-14845.002.patch, 
> HADOOP-14845.003.patch, HADOOP-14845.004.patch, 
> HADOOP-14845-branch-2-001.patch.txt, HADOOP-14845-branch-2-002.patch, 
> HADOOP-14845-branch-2-003.patch
>
>
> The HDFS spec requires only traverse checks for any file accessed via 
> getFileStatus ... and since WASB does not support traverse checks, removing 
> this call effectively removed all protections for the getFileStatus call. The 
> reasoning at that time was that doing a performAuthCheck was the wrong thing 
> to do, since it was going against the spec, and that the correct fix to the 
> getFileStatus issue was to implement traverse checks rather than go against 
> the spec by calling performAuthCheck. The side-effects of such a change were 
> not fully clear at that time, but the thinking was that it was safer to 
> remain true to the spec, as far as possible.
> The reasoning remains correct even today. But in view of the security hole 
> introduced by this change (that anyone can load up any other user's data in 
> hive), and keeping in mind that WASB does not intend to implement traverse 
> checks, we propose a compromise.
> We propose (re)introducing a read-access check to getFileStatus(), that would 
> check the existing ancestor for read-access whenever invoked. Although not 
> perfect (in that it is a departure from the spec), we believe that it is a 
> good compromise between having no checks at all; and implementing full-blown 
> traverse checks.
> For scenarios that deal with intermediate folders like mkdirs, the call would 
> check for read access against an existing ancestor (when invoked from shell) 
> for intermediate non-existent folders – {{ mkdirs /foo/bar, where only "/" 
> exists, would result in read-checks against "/" for "/","/foo" and "/foo/bar" 
> }}. This can be thought of, as being a close-enough substitute for the 
> traverse checks that hdfs does.
> For other scenarios that don't deal with non-existent intermediate folders – 
> like read, delete etc, the check will happen against the parent. Once again, 
> we can think of the read-check against the parent as a substitute for the 
> traverse check, which can be customized for various users with ranger 
> policies.
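A sketch of the ancestor walk described above (hypothetical helper; {{performAuthCheck}} stands in for the WASB-internal call): find the nearest existing ancestor and require READ on it before serving the request.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class AncestorReadCheck {
  /** Nearest existing ancestor of 'path' (possibly the root). */
  static Path existingAncestor(FileSystem fs, Path path) throws IOException {
    Path current = path;
    while (current.getParent() != null && !fs.exists(current)) {
      current = current.getParent();
    }
    // the caller would then do the WASB-internal equivalent of
    // performAuthCheck(current, READ) before answering getFileStatus(path)
    return current;
  }
}
{code}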



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189564#comment-16189564
 ] 

Hadoop QA commented on HADOOP-14220:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | HADOOP-14220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890139/HADOOP-14220-branch-2-017.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4127c71ebd2 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 8beae14 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13445/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13445/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13445/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve 

[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Status: Patch Available  (was: Open)

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-branch-2-017.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Attachment: HADOOP-14220-branch-2-017.patch

Somehow patch 016 got another patch mixed in... my own fault for trying to 
cherry-pick.


patch 017: rebuilt patch

Testing: local & remote DDB, s3 ireland

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-branch-2-017.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Target Version/s: 2.9.0, 3.0.0-beta1  (was: 3.0.0-beta1)
  Status: Open  (was: Patch Available)

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2017-10-03 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189535#comment-16189535
 ] 

Masatake Iwasaki commented on HADOOP-12672:
---

Feel free to file a new JIRA to fix the value.

bq. Neither Client#getTimeout nor Client#getRpcTimeout is really used; only a 
unit test calls them.

How about NameNodeProxiesClient#createNonHAProxyWithClientProtocol? You should 
check whether [the above 
comment|https://issues.apache.org/jira/browse/HADOOP-12672?focusedCommentId=15192023=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15192023]
 still holds in the current code.
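
For reference, the interaction this issue fixed, expressed as client-side 
configuration (the keys are the standard IPC configuration names; the values 
are examples only, not recommendations):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch: with this fix, a client configured like the below
// keeps sending a liveness ping every ipc.ping.interval and only fails the
// call once the total wait exceeds ipc.client.rpc-timeout.ms, instead of the
// rpc timeout silently disabling the ping mechanism.
Configuration conf = new Configuration();
conf.setBoolean("ipc.client.ping", true);         // keep IPC pings enabled
conf.setInt("ipc.ping.interval", 60000);          // ping the server every 60s
conf.setInt("ipc.client.rpc-timeout.ms", 180000); // fail the RPC after 180s total
{code}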


> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch, HADOOP-12672.005.patch, 
> HADOOP-12672.006.patch
>
>
> Currently if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides the ipc.ping.interval and client will throw exception 
> instead of sending ping when the interval is passed. RPC timeout should work 
> without effectively disabling IPC ping.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189533#comment-16189533
 ] 

Hadoop QA commented on HADOOP-14220:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-14220 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890133/HADOOP-14220-branch-2-016.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13444/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189527#comment-16189527
 ] 

Steve Loughran edited comment on HADOOP-14220 at 10/3/17 10:37 AM:
---

patch 016; patch 015 backported to branch-2. Usual test tweaks: closures -> 
callable, final references to variables

tested: s3a ireland, dynamodb, w/ SSE-KMS to round things out


was (Author: ste...@apache.org):
patch 016; patch 015 backported to branch-2. Usual test tweaks: closures -> 
callable, final references to variables

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Attachment: HADOOP-14220-branch-2-016.patch

patch 016; patch 015 backported to branch-2. Usual test tweaks: closures -> 
callable, final references to variables

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14220:
-

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Status: Patch Available  (was: Reopened)

> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-015.patch, HADOOP-14220-016.patch, 
> HADOOP-14220-branch-2-016.patch, HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, 
> HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189478#comment-16189478
 ] 

Hadoop QA commented on HADOOP-13835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
39s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 45s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890123/HADOOP-13835.branch-2.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  cc  |
| uname | Linux abd89b0ed5da 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 8beae14 |
| Default Java | 1.7.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13443/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13443/testReport/ |
| modules | C: hadoop-common-project/hadoop-common . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13443/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks

2017-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189447#comment-16189447
 ] 

Steve Loughran commented on HADOOP-14845:
-

thanks. I've been doing some merge work internally too... it's a combination of 
the (welcome) cleanup work in the other patch, the move to parallel test runs, 
and the need to rename all tests against live endpoints to ITest*.

> Azure wasb: getFileStatus not making any auth checks
> 
>
> Key: HADOOP-14845
> URL: https://issues.apache.org/jira/browse/HADOOP-14845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0
>
> Attachments: HADOOP-14845.001.patch, HADOOP-14845.002.patch, 
> HADOOP-14845.003.patch, HADOOP-14845-branch-2-001.patch.txt, 
> HADOOP-14845-branch-2-002.patch, HADOOP-14845-branch-2-003.patch
>
>
> The HDFS spec requires only traverse checks for any file accessed via 
> getFileStatus ... and since WASB does not support traverse checks, removing 
> this call effectively removed all protections for the getFileStatus call. The 
> reasoning at that time was that doing a performAuthCheck was the wrong thing 
> to do, since it was going against the spec, and that the correct fix to the 
> getFileStatus issue was to implement traverse checks rather than go against 
> the spec by calling performAuthCheck. The side-effects of such a change were 
> not fully clear at that time, but the thinking was that it was safer to 
> remain true to the spec, as far as possible.
> The reasoning remains correct even today. But in view of the security hole 
> introduced by this change (that anyone can load up any other user's data in 
> Hive), and keeping in mind that WASB does not intend to implement traverse 
> checks, we propose a compromise.
> We propose (re)introducing a read-access check to getFileStatus(), that would 
> check the existing ancestor for read-access whenever invoked. Although not 
> perfect (in that it is a departure from the spec), we believe that it is a 
> good compromise between having no checks at all; and implementing full-blown 
> traverse checks.
> For scenarios that deal with intermediate folders like mkdirs, the call would 
> check for read access against an existing ancestor (when invoked from shell) 
> for intermediate non-existent folders – {{ mkdirs /foo/bar, where only "/" 
> exists, would result in read-checks against "/" for "/","/foo" and "/foo/bar" 
> }}. This can be thought of as a close-enough substitute for the 
> traverse checks that hdfs does.
> For other scenarios that don't deal with non-existent intermediate folders – 
> like read, delete etc, the check will happen against the parent. Once again, 
> we can think of the read-check against the parent as a substitute for the 
> traverse check, which can be customized for various users with ranger 
> policies.
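
To make the proposal concrete, here is a minimal Java sketch of the 
"nearest existing ancestor" lookup it describes; this is an illustration 
under stated assumptions, not the actual hadoop-azure change, and 
{{fs.exists()}} merely stands in for whatever store-level existence probe 
the real code would use to avoid re-entering getFileStatus:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class AncestorCheckSketch {
  // Walk up from the requested path to the nearest ancestor that exists.
  // Per the proposal, the authorizer would then require READ access on the
  // returned path (e.g. mkdirs /foo/bar with only "/" present resolves to
  // "/" for each of /, /foo and /foo/bar).
  static Path existingAncestor(FileSystem fs, Path path) throws IOException {
    Path ancestor = path;
    while (ancestor.getParent() != null && !fs.exists(ancestor)) {
      ancestor = ancestor.getParent();
    }
    return ancestor;
  }
}
{code}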



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14918) remove the Local Dynamo DB test option

2017-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189443#comment-16189443
 ] 

Steve Loughran commented on HADOOP-14918:
-

OK. I only ever go with "" or "-Ds3guard -Ddynamo", but yes, if we can retain 
coverage, then keep it around. 

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with... eventually 
> there'll be differences in the API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS SDK
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> Straightforward to remove.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14921) Conflicts when starting daemons with the same name

2017-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189440#comment-16189440
 ] 

Steve Loughran commented on HADOOP-14921:
-

hdfs-router?

> Conflicts when starting daemons with the same name
> --
>
> Key: HADOOP-14921
> URL: https://issues.apache.org/jira/browse/HADOOP-14921
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>
> HDFS-10467 is adding a {{Router}} while YARN already added one for YARN-2915. 
> Both of them are ultimately started from {{hadoop-functions.sh}}. For the PID 
> file both of them end up using {{/tmp/hadoop-hadoop-router.pid}}. I propose 
> to also include the command name in the PID file name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14922) Build of Mapreduce Native Task module fails with unknown opcode "bswap"

2017-10-03 Thread Anup Halarnkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189411#comment-16189411
 ] 

Anup Halarnkar commented on HADOOP-14922:
-

Hi,

This error is due to the missing bswap opcode on POWER. I checked the 
primitives.h header file, which has bswap32 and bswap64 definitions for other 
architectures, so I added a definition for POWER using the 
"__builtin_bswap32()" and "__builtin_bswap64()" macros, which are GCC builtins.

I verified the assembly output of these builtins on my POWER machine and then 
created a patch.

Please find the patch attached to resolve this issue!
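
For context, a hedged sketch of the kind of definition described (the guard 
macro and helper names follow the style of the existing primitives.h helpers 
for other architectures; treat this as an illustration rather than the 
literal patch):

{code:cpp}
#include <stdint.h>

// On PPC64 there is no x86 `bswap` instruction, so fall back to the GCC
// builtins, which emit the architecture's byte-reversal sequence.
#if defined(__PPC64__) || defined(__powerpc64__)
inline uint32_t bswap(uint32_t val) {
  return __builtin_bswap32(val);
}

inline uint64_t bswap64(uint64_t val) {
  return __builtin_bswap64(val);
}
#endif
{code}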


> Build of Mapreduce Native Task module fails with unknown opcode "bswap"
> ---
>
> Key: HADOOP-14922
> URL: https://issues.apache.org/jira/browse/HADOOP-14922
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
> Environment: OS: Ubuntu 14.04
> Arch: PPC64LE
>Reporter: Anup Halarnkar
> Fix For: 3.0.0-alpha3
>
> Attachments: ppc-bswap-fix.patch
>
>
> [WARNING] /tmp/cckBBdQp.s: Assembler messages:
> [WARNING] /tmp/cckBBdQp.s:3127: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/cckBBdQp.s:3152: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/BlockCodec.cc.o] Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] /tmp/ccqRfBZp.s: Assembler messages:
> [WARNING] /tmp/ccqRfBZp.s:2098: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccqRfBZp.s:2123: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
> [WARNING] /tmp/cc50B5Mp.s: Assembler messages:
> [WARNING] /tmp/cc50B5Mp.s:3112: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/cc50B5Mp.s:3137: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/BlockCodec.cc.o] 
> Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] /tmp/ccobJqOY.s: Assembler messages:
> [WARNING] /tmp/ccobJqOY.s:2098: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccobJqOY.s:2123: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
> [WARNING] /tmp/ccdaQ1CY.s: Assembler messages:
> [WARNING] /tmp/ccdaQ1CY.s:2235: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccdaQ1CY.s:2249: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccRwHt5X.s: Assembler messages:
> [WARNING] /tmp/ccRwHt5X.s:2235: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccRwHt5X.s:2249: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/SnappyCodec.cc.o] Error 1
> [WARNING] make[1]: *** [CMakeFiles/nativetask.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/SnappyCodec.cc.o] 
> Error 1
> [WARNING] make[1]: *** [CMakeFiles/nativetask_static.dir/all] Error 2
> [WARNING] make: *** [all] Error 2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14922) Build of Mapreduce Native Task module fails with unknown opcode "bswap"

2017-10-03 Thread Anup Halarnkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anup Halarnkar updated HADOOP-14922:

Attachment: ppc-bswap-fix.patch

> Build of Mapreduce Native Task module fails with unknown opcode "bswap"
> ---
>
> Key: HADOOP-14922
> URL: https://issues.apache.org/jira/browse/HADOOP-14922
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
> Environment: OS: Ubuntu 14.04
> Arch: PPC64LE
>Reporter: Anup Halarnkar
> Fix For: 3.0.0-alpha3
>
> Attachments: ppc-bswap-fix.patch
>
>
> [WARNING] /tmp/cckBBdQp.s: Assembler messages:
> [WARNING] /tmp/cckBBdQp.s:3127: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/cckBBdQp.s:3152: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/BlockCodec.cc.o] Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] /tmp/ccqRfBZp.s: Assembler messages:
> [WARNING] /tmp/ccqRfBZp.s:2098: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccqRfBZp.s:2123: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
> [WARNING] /tmp/cc50B5Mp.s: Assembler messages:
> [WARNING] /tmp/cc50B5Mp.s:3112: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/cc50B5Mp.s:3137: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/BlockCodec.cc.o] 
> Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] /tmp/ccobJqOY.s: Assembler messages:
> [WARNING] /tmp/ccobJqOY.s:2098: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccobJqOY.s:2123: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
> [WARNING] /tmp/ccdaQ1CY.s: Assembler messages:
> [WARNING] /tmp/ccdaQ1CY.s:2235: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccdaQ1CY.s:2249: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccRwHt5X.s: Assembler messages:
> [WARNING] /tmp/ccRwHt5X.s:2235: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccRwHt5X.s:2249: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/SnappyCodec.cc.o] Error 1
> [WARNING] make[1]: *** [CMakeFiles/nativetask.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/SnappyCodec.cc.o] 
> Error 1
> [WARNING] make[1]: *** [CMakeFiles/nativetask_static.dir/all] Error 2
> [WARNING] make: *** [all] Error 2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14922) Build of Mapreduce Native Task module fails with unknown opcode "bswap"

2017-10-03 Thread Anup Halarnkar (JIRA)
Anup Halarnkar created HADOOP-14922:
---

 Summary: Build of Mapreduce Native Task module fails with unknown 
opcode "bswap"
 Key: HADOOP-14922
 URL: https://issues.apache.org/jira/browse/HADOOP-14922
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha3
 Environment: OS: Ubuntu 14.04
Arch: PPC64LE

Reporter: Anup Halarnkar
 Fix For: 3.0.0-alpha3


[WARNING] /tmp/cckBBdQp.s: Assembler messages:
[WARNING] /tmp/cckBBdQp.s:3127: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/cckBBdQp.s:3152: Error: unrecognized opcode: `bswap'
[WARNING] make[2]: *** 
[CMakeFiles/nativetask.dir/main/native/src/codec/BlockCodec.cc.o] Error 1
[WARNING] make[2]: *** Waiting for unfinished jobs
[WARNING] /tmp/ccqRfBZp.s: Assembler messages:
[WARNING] /tmp/ccqRfBZp.s:2098: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/ccqRfBZp.s:2123: Error: unrecognized opcode: `bswap'
[WARNING] make[2]: *** 
[CMakeFiles/nativetask.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
[WARNING] /tmp/cc50B5Mp.s: Assembler messages:
[WARNING] /tmp/cc50B5Mp.s:3112: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/cc50B5Mp.s:3137: Error: unrecognized opcode: `bswap'
[WARNING] make[2]: *** 
[CMakeFiles/nativetask_static.dir/main/native/src/codec/BlockCodec.cc.o] Error 1
[WARNING] make[2]: *** Waiting for unfinished jobs
[WARNING] /tmp/ccobJqOY.s: Assembler messages:
[WARNING] /tmp/ccobJqOY.s:2098: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/ccobJqOY.s:2123: Error: unrecognized opcode: `bswap'
[WARNING] make[2]: *** 
[CMakeFiles/nativetask_static.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
[WARNING] /tmp/ccdaQ1CY.s: Assembler messages:
[WARNING] /tmp/ccdaQ1CY.s:2235: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/ccdaQ1CY.s:2249: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/ccRwHt5X.s: Assembler messages:
[WARNING] /tmp/ccRwHt5X.s:2235: Error: unrecognized opcode: `bswap'
[WARNING] /tmp/ccRwHt5X.s:2249: Error: unrecognized opcode: `bswap'
[WARNING] make[2]: *** 
[CMakeFiles/nativetask.dir/main/native/src/codec/SnappyCodec.cc.o] Error 1
[WARNING] make[1]: *** [CMakeFiles/nativetask.dir/all] Error 2
[WARNING] make[1]: *** Waiting for unfinished jobs
[WARNING] make[2]: *** 
[CMakeFiles/nativetask_static.dir/main/native/src/codec/SnappyCodec.cc.o] Error 
1
[WARNING] make[1]: *** [CMakeFiles/nativetask_static.dir/all] Error 2
[WARNING] make: *** [all] Error 2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-10-03 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189401#comment-16189401
 ] 

Varun Vasudev commented on HADOOP-13835:


[~ajisakaa] - can you please review the latest patch for branch-2? It addresses 
your review comments. Also, if it looks good to you, can you commit it to 
branch-2, branch-2.9, and branch-2.8? Thanks!

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-10-03 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-13835:
---
Attachment: HADOOP-13835.branch-2.008.patch

Uploaded patch for branch-2.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch, HADOOP-13835.branch-2.008.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2017-10-03 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189326#comment-16189326
 ] 

Yufei Gu commented on HADOOP-12672:
---

Thanks [~iwasakims]. I think that *zero means infinity* is counter-intuitive 
and error-prone, even though the Java libraries do the same. Zero could be 
considered a valid timeout value, while a negative one can't; it's a bit like 
the debate over whether zero is a natural number. If we use zero to represent 
infinity, which number do we use to represent no timeout? 

Neither Client#getTimeout nor Client#getRpcTimeout is really used; only a unit 
test calls them. They are probably fine to change. 

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch, HADOOP-12672.005.patch, 
> HADOOP-12672.006.patch
>
>
> Currently if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides the ipc.ping.interval and client will throw exception 
> instead of sending ping when the interval is passed. RPC timeout should work 
> without effectively disabling IPC ping.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-14916) Replace HdfsFileStatus constructor with a builder pattern.

2017-10-03 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14916 started by Bharat Viswanadham.
---
> Replace HdfsFileStatus constructor with a builder pattern.
> --
>
> Key: HADOOP-14916
> URL: https://issues.apache.org/jira/browse/HADOOP-14916
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
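
For reference, a generic illustration of the builder pattern being proposed 
(the builder type and setter names here are hypothetical examples; the real 
HdfsFileStatus carries many more fields):

{code:java}
// Hypothetical sketch only: replace a long positional constructor call with
// a fluent builder, so new fields can be added without breaking callers.
HdfsFileStatus status = new HdfsFileStatus.Builder()
    .length(1024L)                   // file length in bytes
    .isdir(false)                    // regular file, not a directory
    .replication(3)                  // replication factor
    .blocksize(128L * 1024 * 1024)   // 128 MB block size
    .build();
{code}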




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14920) KMSClientProvider won't work with KMS delegation token retrieved from non-Java client.

2017-10-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189292#comment-16189292
 ] 

Xiaoyu Yao edited comment on HADOOP-14920 at 10/3/17 6:07 AM:
--

Thanks [~xiaochen] for the pointers. I briefly looked into the discussion on 
HADOOP-14445. It seems that we agree the client should be able to set the 
service as needed, based on [~daryn]'s latest summary: "client gets tokens and 
sets service to kp uri".

The issue here is that we want to enable a *non-Java client* like curl over 
HTTP to set the service for its requested delegation token, which is 
orthogonal to HADOOP-14445's support for delegation tokens with KMS HA. For 
non-Java clients we will need this even with HADOOP-14445, unless we remove 
the service check in KMSClientProvider completely, which is solution 2 
mentioned in the description.



was (Author: xyao):
Thanks [~xiaochen] for the pointers. I briefly looked into the discussion on 
HADOOP-14445. It seems that we agree the client should be able to set the 
service as needed, based on [~daryn]'s latest summary: "client gets tokens and 
sets service to kp uri".

The issue here is orthogonal to HADOOP-14445: we want to enable a *non-Java 
client* like curl over HTTP to set the service for its requested delegation 
token. This will be needed for non-Java clients even with HADOOP-14445, unless 
we remove the service check in KMSClientProvider completely, which is solution 
2 mentioned in the description.


> KMSClientProvider won't work with KMS delegation token retrieved from 
> non-Java client.
> --
>
> Key: HADOOP-14920
> URL: https://issues.apache.org/jira/browse/HADOOP-14920
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-14920.001.patch
>
>
> HADOOP-13381 added support for using a KMS delegation token to connect to 
> the KMS server for key operations. However, the logic that checks whether 
> the UGI contains a KMS delegation token assumes that the token must contain 
> a service attribute; otherwise, a KMS delegation token won't be recognized.
> For delegation tokens obtained via a non-Java client such as curl (HTTP), 
> the default DelegationTokenAuthenticationHandler only supports the *renewer* 
> parameter and assumes the client itself will add the service attribute. This 
> means a Java client using KMSClientProvider can't use a KMS delegation token 
> retrieved from a non-Java client, because the token does not contain a 
> service attribute. 
> I did some investigation on this and found two solutions:
> 1. A similar use case exists for webhdfs, and webhdfs supports it with a 
> ["service" 
> parameter|https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Delegation_Token].
> We can do this similarly by allowing the client to specify a service 
> attribute in the request URL, to be included in the returned token like 
> webhdfs does. Even though this changes DelegationTokenAuthenticationHandler 
> and may affect many other web components, it seems to be a clean and 
> low-risk solution because it will be an optional parameter. Also, other 
> components get non-Java client interop support for free if they have a 
> similar use case. 
> 2. The other way to solve this is to relax the token check in 
> KMSClientProvider to check only the token kind instead of the service. This 
> is an easy workaround but seems less optimal to me. 
> cc: [~xiaochen] for additional input.
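
As a client-side illustration of the gap (and of what solution 1 would make 
unnecessary), a hedged Java sketch; the kms:// URI below is only an example 
and decodeTokenFromCurlResponse() is a hypothetical helper:

{code:java}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// A token fetched over HTTP (e.g. with curl) arrives without a service
// field, so KMSClientProvider's token lookup in the UGI misses it. Setting
// the service to the key-provider URI before adding the token is the manual
// step a Java client currently has to perform.
Token<?> kmsToken = decodeTokenFromCurlResponse(); // hypothetical helper
kmsToken.setService(new Text("kms://https@kms-host:9600/kms"));
UserGroupInformation.getCurrentUser().addToken(kmsToken);
{code}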



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14845) Azure wasb: getFileStatus not making any auth checks

2017-10-03 Thread Sivaguru Sankaridurg (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189299#comment-16189299
 ] 

Sivaguru Sankaridurg commented on HADOOP-14845:
---

[~asuresh], [~steve_l]: I'll try forward-porting, fix the tests, and submit 
another patch for trunk.

> Azure wasb: getFileStatus not making any auth checks
> 
>
> Key: HADOOP-14845
> URL: https://issues.apache.org/jira/browse/HADOOP-14845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0
>
> Attachments: HADOOP-14845.001.patch, HADOOP-14845.002.patch, 
> HADOOP-14845.003.patch, HADOOP-14845-branch-2-001.patch.txt, 
> HADOOP-14845-branch-2-002.patch, HADOOP-14845-branch-2-003.patch
>
>
> The HDFS spec requires only traverse checks for any file accessed via 
> getFileStatus ... and since WASB does not support traverse checks, removing 
> this call effectively removed all protections for the getFileStatus call. The 
> reasoning at that time was that doing a performAuthCheck was the wrong thing 
> to do, since it was going against the spec, and that the correct fix to the 
> getFileStatus issue was to implement traverse checks rather than go against 
> the spec by calling performAuthCheck. The side-effects of such a change were 
> not fully clear at that time, but the thinking was that it was safer to 
> remain true to the spec, as far as possible.
> The reasoning remains correct even today. But in view of the security hole 
> introduced by this change (that anyone can load up any other user's data in 
> Hive), and keeping in mind that WASB does not intend to implement traverse 
> checks, we propose a compromise.
> We propose (re)introducing a read-access check to getFileStatus(), that would 
> check the existing ancestor for read-access whenever invoked. Although not 
> perfect (in that it is a departure from the spec), we believe that it is a 
> good compromise between having no checks at all; and implementing full-blown 
> traverse checks.
> For scenarios that deal with intermediate folders like mkdirs, the call would 
> check for read access against an existing ancestor (when invoked from shell) 
> for intermediate non-existent folders – {{ mkdirs /foo/bar, where only "/" 
> exists, would result in read-checks against "/" for "/","/foo" and "/foo/bar" 
> }}. This can be thought of as a close-enough substitute for the 
> traverse checks that hdfs does.
> For other scenarios that don't deal with non-existent intermediate folders – 
> like read, delete etc, the check will happen against the parent. Once again, 
> we can think of the read-check against the parent as a substitute for the 
> traverse check, which can be customized for various users with ranger 
> policies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org