[jira] [Commented] (HADOOP-13114) DistCp should have option to compress data on write

2017-01-13 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822651#comment-15822651
 ] 

Joep Rottinghuis commented on HADOOP-13114:
---

I have similar concerns to the ones raised, a copy shouldn't change the format.

It seems that the patch doesn't allow to use both -update and compress at the 
same time. What if the copy was done first with -compress, then a user wants to 
switch to -update and then changes their job to remove the -compress and switch 
to the -update. It will result in all files getting copied again right?

In the current approach the compression seems to happen on the write-side. That 
means that for copies across expensive network (such as cross-dc copies) the 
data still travels uncompressed first.
Wouldn't it make sense to create wrapper functionality to first compress on the 
source, then use regular distcp? Possibly the compressed temporary data could 
be in a /tmp directory structure. Alternatively one can still distcp first (to 
a tmp location) and then compress if that is desired. The advantage to keep the 
compression step separate from the distcp step is that one could additionally 
collapse files together into fewer files if possible.

We're finding that our users already have a hard time dealing with the 
intricacies of interactions of various distcp flags (-atomic, -update, etc.).

> DistCp should have option to compress data on write
> ---
>
> Key: HADOOP-13114
> URL: https://issues.apache.org/jira/browse/HADOOP-13114
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Suraj Nayak
>Assignee: Suraj Nayak
>Priority: Minor
>  Labels: distcp
> Attachments: HADOOP-13114.05.patch, HADOOP-13114.06.patch, 
> HADOOP-13114-trunk_2016-05-07-1.patch, HADOOP-13114-trunk_2016-05-08-1.patch, 
> HADOOP-13114-trunk_2016-05-10-1.patch, HADOOP-13114-trunk_2016-05-12-1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> DistCp utility should have capability to store data in user specified 
> compression format. This avoids one hop of compressing data after transfer. 
> Backup strategies to different cluster also get benefit of saving one IO 
> operation to and from HDFS, thus saving resources, time and effort.
> * Create an option -compressOutput defaulting to 
> {{org.apache.hadoop.io.compress.BZip2Codec}}. 
> * Users will be able to change codec with {{-D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec}}
> * If distcp compression is enabled, suffix the filenames with default codec 
> extension to indicate the file is compressed. Thus users can be aware of what 
> codec was used to compress the data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13990) Document KMS use of CredentialProvider API

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822649#comment-15822649
 ] 

Hadoop QA commented on HADOOP-13990:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13990 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847461/HADOOP-13990.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f1313dc04bd8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3170f9 |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11439/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document KMS use of CredentialProvider API
> --
>
> Key: HADOOP-13990
> URL: https://issues.apache.org/jira/browse/HADOOP-13990
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13990.001.patch
>
>
> Document that HADOOP-13597 enabled support for Credential Provider API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13990) Document KMS use of CredentialProvider API

2017-01-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13990:

Priority: Minor  (was: Trivial)

> Document KMS use of CredentialProvider API
> --
>
> Key: HADOOP-13990
> URL: https://issues.apache.org/jira/browse/HADOOP-13990
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13990.001.patch
>
>
> Document that HADOOP-13597 enabled support for Credential Provider API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13990) Document KMS use of CredentialProvider API

2017-01-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13990:

Status: Patch Available  (was: Open)

> Document KMS use of CredentialProvider API
> --
>
> Key: HADOOP-13990
> URL: https://issues.apache.org/jira/browse/HADOOP-13990
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HADOOP-13990.001.patch
>
>
> Document that HADOOP-13597 enabled support for Credential Provider API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13990) Document KMS use of CredentialProvider API

2017-01-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13990:

Attachment: HADOOP-13990.001.patch

Patch 001
* Update CredentialProviderAPI.md and index.md.vm

> Document KMS use of CredentialProvider API
> --
>
> Key: HADOOP-13990
> URL: https://issues.apache.org/jira/browse/HADOOP-13990
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HADOOP-13990.001.patch
>
>
> Document that HADOOP-13597 enabled support for Credential Provider API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update scripts to be smarter when running with privilege

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822592#comment-15822592
 ] 

Hadoop QA commented on HADOOP-13673:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} The patch generated 0 new + 108 unchanged - 12 fixed 
= 108 total (was 120) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
16s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-mapreduce-project in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13673 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847441/HADOOP-13673.04.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 57303bef37c4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3170f9 |
| shellcheck | v0.4.5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11437/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11437/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn 
hadoop-mapreduce-project U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11437/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch, HADOOP-13673.03.patch, HADOOP-13673.04.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can 

[jira] [Commented] (HADOOP-13989) Fix typo in hadoop-client shade configuration

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822591#comment-15822591
 ] 

Hadoop QA commented on HADOOP-13989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-client-runtime in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847454/HADOOP-13989.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 21a836a7e904 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3170f9 |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11438/testReport/ |
| modules | C: hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster U: hadoop-client-modules |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11438/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typo in hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Priority: Minor
> 

[jira] [Updated] (HADOOP-13990) Document KMS use of CredentialProvider API

2017-01-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13990:

Description: Document that HADOOP-13597 enabled support for Credential 
Provider API.  (was: HADOOP-13597 actually enabled support for Credential 
Provider API.)

> Document KMS use of CredentialProvider API
> --
>
> Key: HADOOP-13990
> URL: https://issues.apache.org/jira/browse/HADOOP-13990
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> Document that HADOOP-13597 enabled support for Credential Provider API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13990) Document KMS use of CredentialProvider API

2017-01-13 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13990:
---

 Summary: Document KMS use of CredentialProvider API
 Key: HADOOP-13990
 URL: https://issues.apache.org/jira/browse/HADOOP-13990
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, kms
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial


HADOOP-13597 actually enabled support for Credential Provider API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13989) Fix typo in hadoop-client shade configuration

2017-01-13 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas updated HADOOP-13989:

Fix Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> Fix typo in hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13989.001.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{}} instead of {{}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13989) Fix typo in hadoop-client shade configuration

2017-01-13 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas updated HADOOP-13989:

Attachment: HADOOP-13989.001.patch

> Fix typo in hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Priority: Minor
> Attachments: HADOOP-13989.001.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{}} instead of {{}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822577#comment-15822577
 ] 

Hadoop QA commented on HADOOP-13877:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
59s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847437/HADOOP-13877-HADOOP-13345.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f80e10ab550e 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 2220b78 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11436/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11436/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, 

[jira] [Created] (HADOOP-13989) Fix typo in hadoop-client shade configuration

2017-01-13 Thread Joe Pallas (JIRA)
Joe Pallas created HADOOP-13989:
---

 Summary: Fix typo in hadoop-client shade configuration
 Key: HADOOP-13989
 URL: https://issues.apache.org/jira/browse/HADOOP-13989
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha2
Reporter: Joe Pallas
Priority: Minor


The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
typo in the configuration of the shade module.  They say {{}} 
instead of {{}}.  (This was noticed by IntelliJ, but not by 
maven.)

Shade plugin doc is at 
[http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-13 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822564#comment-15822564
 ] 

Aaron Fabbri commented on HADOOP-13650:
---

Thanks for the followup patch [~eddyxu].  +1 on the code review.  I will try to 
get some testing in this evening or early next week.
{quote}
AF> S3A's listFiles discovers non-empty directories
Thanks for catching this. The comments are outdated now. Since 
LocatedFileStatus erased the isEmptyDir, the code here is still valid I think. 
I modified the comments.
{quote}

Ah.. Another reason the isEmptyDirectory bit should probably be ignored by 
MetadataStore.  This will get addressed in HADOOP-13914, so we're good here.

{quote}
 AF> Should we add to dirCache here?
dirCache is used in putParentsIfNotPresent(child); after this statement.
{quote}

Understood, you put the *parent* in the dirCache there.  In this code though, 
you are putting the "child" dir in MS, so you could also remember that the 
child dir is already in MS.  This current code might put the "child" dir in MS 
two times (once here and again when you add its children), depending on 
iteration order of listFiles().  This does not affect correctness (it is just 
perf optimization), so I'm still +1 on this patch.

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch, 
> HADOOP-13650-HADOOP-13345.001.patch, HADOOP-13650-HADOOP-13345.002.patch, 
> HADOOP-13650-HADOOP-13345.003.patch, HADOOP-13650-HADOOP-13345.004.patch, 
> HADOOP-13650-HADOOP-13345.005.patch, HADOOP-13650-HADOOP-13345.006.patch, 
> HADOOP-13650-HADOOP-13345.007.patch, HADOOP-13650-HADOOP-13345.008.patch, 
> HADOOP-13650-HADOOP-13345.009.patch
>
>
> Similar systems like EMRFS has the CLI tools to manipulate the metadata 
> store, i.e., create or delete metadata store, or {{import}}, {{sync}} the 
> file metadata between metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822563#comment-15822563
 ] 

Greg Senia commented on HADOOP-13988:
-

 [~lmccay] I will fix shortly!

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13977) IntelliJ Compilation error in ITUseMiniCluster.java

2017-01-13 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated HADOOP-13977:

Attachment: build.log

[~busbey], I attached the logs requested.

> IntelliJ Compilation error in ITUseMiniCluster.java
> ---
>
> Key: HADOOP-13977
> URL: https://issues.apache.org/jira/browse/HADOOP-13977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Sean Busbey
> Attachments: build.log
>
>
> The repro steps:
> mvn clean install -DskipTests and then "Build/Build Project" in IntelliJ IDEA 
> to update indexes, etc.
> ...hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
> Error:(34, 28) java: package org.apache.hadoop.fs does not exist
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13823) s3a rename: fail if dest file exists

2017-01-13 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822547#comment-15822547
 ] 

Ashutosh Chauhan commented on HADOOP-13823:
---

[~ste...@apache.org] You mentioned in description that you intend to have same 
behavior for s3a rename as HDFS. However, both HDFS and azure returns false 
when dest file already exists, but in this patch you are instead throwing 
exception in that case.  
Further, this changed semantics doesn't help Hive since even with this fix we 
need to handle HDFS and S3a differently because of return false vs throw 
exception. Was that intentional?

> s3a rename: fail if dest file exists
> 
>
> Key: HADOOP-13823
> URL: https://issues.apache.org/jira/browse/HADOOP-13823
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13813-branch-2-001.patch, 
> HADOOP-13823-branch-2-002.patch
>
>
> HIVE-15199 shows that s3a allows rename onto an existing file, which is 
> something HDFS, azure and s3n do not permit (though file:// does). This is 
> breaking bits of Hive, is an inconsistency with HDFS and a regression 
> compared to s3n semantics.
> I propose: rejecting the rename on a file -> file rename if the destination 
> exists (easy) and changing the s3a.xml contract file to declare the behavior 
> change; this is needed for 
> {{AbstractContractRenameTest.testRenameFileOverExistingFile}} to handle the 
> changed semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822531#comment-15822531
 ] 

Hadoop QA commented on HADOOP-13945:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 8 new + 43 unchanged - 
0 fixed = 51 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-tools/hadoop-azure generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 27s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  org.apache.hadoop.fs.azure.NativeAzureFileSystem.KEY_AZURE_AUTHORIZATION 
isn't final but should be  At NativeAzureFileSystem.java:be  At 
NativeAzureFileSystem.java:[line 1114] |
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847426/HADOOP-13945.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 96e6552fc383 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3170f9 |
| Default Java | 

[jira] [Updated] (HADOOP-13673) Update scripts to be smarter when running with privilege

2017-01-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Attachment: HADOOP-13673.04.patch

-04:
* rebase
* spelling fixes
* test for symlinks
* some whitespace fixes
* more documentation + fixes

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch, HADOOP-13673.03.patch, HADOOP-13673.04.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update scripts to be smarter when running with privilege

2017-01-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822525#comment-15822525
 ] 

Allen Wittenauer commented on HADOOP-13673:
---

Thanks for the feedback [~raviprak] and [~andrew.wang] (who did his offline 
while JIRA was down).  -04 should cover all of the very valid points you've 
raised.

bq. I'm not exactly sure how HADOOP_REEXECED_CMD is being used to prevent a 
fork bomb, but could a script set it to false explicitly as part of itself? 
i.e. what's preventing access to that variable from a user script?

Anything that runs inside the environment can of course wreak havoc on 
anything.  If we ignore bad actors, what happens is this:

1. user runs command 
2. command determines that _USER has been set and it needs to get re-executed 
as a different user.
3. command calls itself with same parameters, etc, but adds --reexec to the 
command line
4. if for some reason command calls itself again, there will be two --reexec's 
on the command line (since those options aren't stripped) which will stop it 
during the param parasing.  Additionally, hadoop_need_reexec will return false 
as well.

Sure, it's not as strong as a semaphore, but I think it should stop most 
non-malicious code.

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch, HADOOP-13673.03.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-13 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13877:
--
Status: Patch Available  (was: Open)

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-13 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822506#comment-15822506
 ] 

Aaron Fabbri edited comment on HADOOP-13877 at 1/13/17 11:15 PM:
-

Attaching v3 patch.  Adds [~liuml07]'s suggestions to also fix the 
double-creation of the test contract, instead getting the reference from the 
superclass.  From v2 patch: Rebased on updated feature branch, and fixes 
checkstyle issues.  Also fixes a new failure in in 
testInitializeWithConfiguration() when fs.s3a.s3guard.ddb.table is set in 
config.


was (Author: fabbri):
Attaching v3 patch.  Adds [~liuml07]'s suggestions to also fix the 
double-creation of the test contract, instead getting the reference from the 
superclass.  From v2 patch: Rebased on latest trunk, and fixes checkstyle 
issues.

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-13 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13877:
--
Status: Open  (was: Patch Available)

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-13 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13877:
--
Attachment: HADOOP-13877-HADOOP-13345.003.patch

Attaching v3 patch.  Adds [~liuml07]'s suggestions to also fix the 
double-creation of the test contract, instead getting the reference from the 
superclass.  From v2 patch: Rebased on latest trunk, and fixes checkstyle 
issues.

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-01-13 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Attachment: HADOOP-13945.3.patch

Adding a patch which is created on top of 
https://issues.apache.org/jira/secure/attachment/12845516/HADOOP-13930.002.patch,
Patch contains following additional changes,
-  Kerberos support for {{RemoteWasbAuthorizerImpl#authorize()}} request to 
remote server.
- Added a configuration property {{fs.azure.enable.kerberos.support}} to 
enable/disable Kerberos support.
- Support for impersonation(doAs) of a user with remote service.

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.1.patch, HADOOP-13945.2.patch, 
> HADOOP-13945.3.patch
>
>
> Current implementation of Azure storage client for Hadoop ({{WASB}}) does not 
> support Kerberos Authentication and FileSystem authorization, which makes it 
> unusable in secure environments with multi user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is it use the same REST service 
> which being used to generate the SAS Keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822408#comment-15822408
 ] 

Larry McCay commented on HADOOP-13988:
--

This has a type too:

{noformat}
+// Check if the realUser patches the user used by process
{noformat}

s/patches/matches/

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822404#comment-15822404
 ] 

Larry McCay edited comment on HADOOP-13988 at 1/13/17 9:54 PM:
---

Looks like findbugs flagged the following:

{noformat}
+if (currentUgi.getRealUser().getShortUserName() != 
UserGroupInformation.getLoginUser().getShortUserName()) {
{noformat}

That should use an !equals() call - right?
May need to revisit that for your cluster.


was (Author: lmccay):
Looks like findbugs flagged the following:

{noformat}
+if (currentUgi.getRealUser().getShortUserName() != 
UserGroupInformation.getLoginUser().getShortUserName()) {
{noformat}

That should use an !equals() call - right?


> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13970) garbage data read from the beginning of a tar file

2017-01-13 Thread Steve Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822403#comment-15822403
 ] 

Steve Yang commented on HADOOP-13970:
-

Not really, the contents of the data files are read in correctly. The only 
garbage text comes at the beginning of the first line.

> garbage data read from the beginning of a tar file
> --
>
> Key: HADOOP-13970
> URL: https://issues.apache.org/jira/browse/HADOOP-13970
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
> Attachments: taxi_simplified.tar
>
>
> Hadoop from CDH 5.7.1
> on Spark using databricks ('com.databricks:spark-csv_2.10:1.5.0') to read in 
> a tar file which consists of 3 .csv files. 
> sqlCtx.read().format("com.databricks.spark.csv").option(...)
> .load(objectName);
> The tar file contains 3 files:
> taxi_simplified1.csv
> taxi2.csv
> simplified3.csv
> where the first line (header) is:
> trip_distance,dropoff_datetime,dropoff_geocode,passenger_count,medallion,rate_code,tip_amount,total_amount,store_and_fwd_flag,mta_tax,pickup_geocode,trip_time_in_secs,surcharge,vendor_id,tolls_amount,fare_amount,pickup_datetime,hack_license,payment_type,ordertime
> Note the first column header is "trip_distance". But the read data shows:
> taxi_simplified1.csv^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@644^@0010013^@3001121^@0046004^@13002371150^@013521^@
>  
> 0^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ustar
>   
> ^@optitest^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@trip_distance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822404#comment-15822404
 ] 

Larry McCay commented on HADOOP-13988:
--

Looks like findbugs flagged the following:

{noformat}
+if (currentUgi.getRealUser().getShortUserName() != 
UserGroupInformation.getLoginUser().getShortUserName()) {
{noformat}

That should use an !equals() call - right?


> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822348#comment-15822348
 ] 

Hadoop QA commented on HADOOP-13988:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 4 new + 14 unchanged - 0 fixed = 18 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 15 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getActualUgi()   At 
KMSClientProvider.java:== or != in 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getActualUgi()   At 
KMSClientProvider.java:[line 1113] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847413/HADOOP-13988.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0bb3fca653a8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3170f9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HADOOP-13986) UGI.UgiMetrics.renewalFailureTotal is not printable

2017-01-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822323#comment-15822323
 ] 

Wei-Chiu Chuang commented on HADOOP-13986:
--

Yeah that's certainly the better approach to fix this kind of mistakes once and 
for all.

> UGI.UgiMetrics.renewalFailureTotal is not printable
> ---
>
> Key: HADOOP-13986
> URL: https://issues.apache.org/jira/browse/HADOOP-13986
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> The metrics (renewalFailures and renewalFailuresTotal) in the  following code 
> snippet are not printable.
> {code:title=UserGroupInformation.java}
> metrics.renewalFailuresTotal.incr();
> final long tgtEndTime = tgt.getEndTime().getTime();
> LOG.warn("Exception encountered while running the renewal "
> + "command for {}. (TGT end time:{}, renewalFailures: {},"
> + "renewalFailuresTotal: {})", getUserName(), tgtEndTime,
> metrics.renewalFailures, metrics.renewalFailuresTotal, ie);
> {code}
> The output of the code is like the following:
> {quote}
> 2017-01-12 12:23:14,062 WARN  security.UserGroupInformation 
> (UserGroupInformation.java:run(1012)) - Exception encountered while running 
> the renewal command for f...@example.com. (TGT end time:148425260, 
> renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@323aa7f9,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@c8af058)
> ExitCodeException exitCode=1: kinit: krb5_cc_get_principal: No credentials 
> cache file found
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822301#comment-15822301
 ] 

Greg Senia commented on HADOOP-13988:
-

This patch requires these JIRAs to be included also

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Attachment: HADOOP-13988.patch

Initial Patch that is running in our test environment right now across 25 nodes.

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Affects Version/s: 2.8.0
   Status: Patch Available  (was: Open)

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3, 2.8.0
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13978) Update project release notes for 3.0.0-alpha2

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822247#comment-15822247
 ] 

Hadoop QA commented on HADOOP-13978:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13978 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847398/HADOOP-13978.002.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 0129c1973541 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3170f9 |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11433/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update project release notes for 3.0.0-alpha2
> -
>
> Key: HADOOP-13978
> URL: https://issues.apache.org/jira/browse/HADOOP-13978
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13978.001.patch, HADOOP-13978.002.patch
>
>
> Let's update the website release notes for 3.0.0-alpha2's changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13978) Update project release notes for 3.0.0-alpha2

2017-01-13 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-13978:
-
Attachment: HADOOP-13978.002.patch

Updating the patch to add a note on Opportunistic containers and Distributed 
Scheduling.
This feature spans a couple of umbrella JIRAs. We had added release notes on 
YARN-2877. Will add to the remaining JIRAs as well.

> Update project release notes for 3.0.0-alpha2
> -
>
> Key: HADOOP-13978
> URL: https://issues.apache.org/jira/browse/HADOOP-13978
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13978.001.patch, HADOOP-13978.002.patch
>
>
> Let's update the website release notes for 3.0.0-alpha2's changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13988:
-
Description: 
After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider issues 
have not been resolved. We put a test build together and applied HADOOP-13558 
and HADOOP-13749 these two fixes did still not solve the issue with requests 
coming from WebHDFS through to Knox to a TDE zone.

So we added some debug to our build and determined effectively what is 
happening here is a double proxy situation which does not seem to work. So we 
propose the following fix in getActualUgi Method:

{noformat}
 }
 // Use current user by default
 UserGroupInformation actualUgi = currentUgi;
 if (currentUgi.getRealUser() != null) {
   // Use real user for proxy user
   if (LOG.isDebugEnabled()) {
   LOG.debug("using RealUser for proxyUser);
}
   actualUgi = currentUgi.getRealUser();
   if (getDoAsUser() != null) {
  if (LOG.isDebugEnabled()) {
LOG.debug("doAsUser exists");
LOG.debug("currentUGI realUser shortName: {}", 
currentUgi.getRealUser().getShortUserName());
LOG.debug("processUGI loginUser shortName: {}", 
UserGroupInformation.getLoginUser().getShortUserName());
  }
  if (currentUgi.getRealUser().getShortUserName() != 
UserGroupInformation.getLoginUser().getShortUserName()) {
  if (LOG.isDebugEnabled()) {
LOG.debug("currentUGI.realUser does not match 
UGI.processUser);
  }
  actualUgi = UserGroupInformation.getLoginUser();
  if (LOG.isDebugEnabled()) {
LOG.debug("LoginUser for Proxy: {}", 
actualUgi.getLoginUser());
  }
  }
   }

 } else if (!currentUgiContainsKmsDt() &&
 !currentUgi.hasKerberosCredentials()) {
   // Use login user for user that does not have either
   // Kerberos credential or KMS delegation token for KMS operations
   if (LOG.isDebugEnabled()) {
   LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
Credentials");
}
   actualUgi = currentUgi.getLoginUser();
 }
 return actualUgi;
   }

{noformat}

  was:
After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider issues 
have not been resolved. We put a test build together and applied HADOOP-13558 
and HADOOP-13749 these two fixes did still not solve the issue with requests 
coming from WebHDFS through to Knox to a TDE zone.

So we added some debug to our build and determined effectively what is 
happening here is a double proxy situation which does not seem to work. So we 
propose the following fix in getActualUgi Method:

 }
 // Use current user by default
 UserGroupInformation actualUgi = currentUgi;
 if (currentUgi.getRealUser() != null) {
   // Use real user for proxy user
   if (LOG.isDebugEnabled()) {
   LOG.debug("using RealUser for proxyUser);
}
   actualUgi = currentUgi.getRealUser();
   if (getDoAsUser() != null) {
  if (LOG.isDebugEnabled()) {
LOG.debug("doAsUser exists");
LOG.debug("currentUGI realUser shortName: {}", 
currentUgi.getRealUser().getShortUserName());
LOG.debug("processUGI loginUser shortName: {}", 
UserGroupInformation.getLoginUser().getShortUserName());
  }
  if (currentUgi.getRealUser().getShortUserName() != 
UserGroupInformation.getLoginUser().getShortUserName()) {
  if (LOG.isDebugEnabled()) {
LOG.debug("currentUGI.realUser does not match 
UGI.processUser);
  }
  actualUgi = UserGroupInformation.getLoginUser();
  if (LOG.isDebugEnabled()) {
LOG.debug("LoginUser for Proxy: {}", 
actualUgi.getLoginUser());
  }
  }
   }

 } else if (!currentUgiContainsKmsDt() &&
 !currentUgi.hasKerberosCredentials()) {
   // Use login user for user that does not have either
   // Kerberos credential or KMS delegation token for KMS operations
   if (LOG.isDebugEnabled()) {
   LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
Credentials");
}
   actualUgi = currentUgi.getLoginUser();
 }
 return actualUgi;
   }


> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA 

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822170#comment-15822170
 ] 

Greg Senia commented on HADOOP-13988:
-

[~lmccay] and [~xyao] I have my original patch I will attach it and we can 
modify and test from there.



> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822131#comment-15822131
 ] 

Xiaoyu Yao commented on HADOOP-13988:
-

Thanks [~gss2002] for reporting the issue and propose the fix. The proposed fix 
makes sense to me. 
Based on that, I think we can simplify the change below assuming the proxy user 
from Hadoop service will always set the 
UserGroupInformation.AuthenticationMethod.PROXY while proxy user from client 
directly will not.

Also, we should add the additional tracing to UGI#logAllUserInfo(). 

{code}
 if (currentUgi.getRealUser() != null) {
  if (currentUgi.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.PROXY) {
// Use login user for proxy user from another proxy server
actualUgi = currentUgi.getLoginUser();
  } else {
// Use real user for proxy user from client directly
actualUgi = currentUgi.getRealUser();
  }
  }
{code}

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822062#comment-15822062
 ] 

John Zhuge edited comment on HADOOP-13987 at 1/13/17 6:27 PM:
--

Larry and I are discussing the pros and cons the following approaches to 
enhance {{SSLFactory#readSSLConfiguration}}:
* Read credential provider path. Whenever Credential Provider needs another 
property or any other {{Configuration}} change might affect reading SSL 
properties, remember to update this code.
* Create empty sslConf and add both {{ssl-MODE.xml}} and {{core-site.xml}} as 
configuration resource
* Create sslConf as a clone of {{SSLFactory#conf}} then add {{ssl-MODE.xml}} as 
configuration resource

Both 2 and 3 pull in lots of properties not needed for SSL. Any potential 
permission issues or name collisions?


was (Author: jzhuge):
Larry and I are discussing the pros and cons the following approaches to 
enhance {{SSLFactory#readSSLConfiguration}}:
1. Read credential provider path. Whenever Credential Provider needs another 
property or any other {{Configuration}} change might affect reading SSL 
properties, remember to update this code.
2. Still create empty sslConf and add both {{ssl-MODE.xml}} and 
{{core-site.xml}} as configuration resource
3. Create sslConf as a clone of {{SSLFactory#conf}} then add {{ssl-MODE.xml}} 
as configuration resource

Both 2 and 3 pull in lots of properties not needed for SSL. Any potential 
permission issues or name collisions?

> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: populated the credentials file, added 
> "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
> list" failed due to incorrect password. So I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key 
> list" worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update scripts to be smarter when running with privilege

2017-01-13 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822113#comment-15822113
 ] 

Ravi Prakash commented on HADOOP-13673:
---

Hi Allen!

Thanks for the patch! It looks good. I only could find these nits:

# "Atempting" -> "Attempting"
# Remove "${EUID} comes from the shell itself!" in hadoop-functions.sh
# I'm not exactly sure how HADOOP_REEXECED_CMD is being used to prevent a fork 
bomb, but could a script set it to false explicitly as part of itself? i.e. 
what's preventing access to that variable from a user script?
#pwd
# Is hadoop_abs supposed to resolve links? If yes, in hadoop_abs.bats could you 
please add a test for links?

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch, HADOOP-13673.03.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822104#comment-15822104
 ] 

John Zhuge commented on HADOOP-13987:
-

Tricky use case if we pull in provider path:
* A central credential provider specified in {{core-site.xml}} which is the 
same across nodes. The provider contains SSL properties.
* Wish to use different SSL properties in {{ssl-MODE.xml}} on different nodes 
or just in different config directories, but {{getPassword}} always looks up 
provider first. So the only way to override SSL properties is to create a new 
provider storing different SSL properties and override the provider path.

> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: populated the credentials file, added 
> "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
> list" failed due to incorrect password. So I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key 
> list" worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822096#comment-15822096
 ] 

Larry McCay commented on HADOOP-13988:
--

[~gss2002] - thank you for bringing this insight to a JIRA!

I have observed this double proxying issue before and I think this may actually 
help it in other areas as well.
Do you plan to provide a patch for it with appropriate tests as well?

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13986) UGI.UgiMetrics.renewalFailureTotal is not printable

2017-01-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822093#comment-15822093
 ] 

Steve Loughran commented on HADOOP-13986:
-

really those Gauge.toString() operators should return values; certainly we 
could do that for the various int/long gauges and counters. That would ensure 
that the evaluation was only if the log statement was needed

> UGI.UgiMetrics.renewalFailureTotal is not printable
> ---
>
> Key: HADOOP-13986
> URL: https://issues.apache.org/jira/browse/HADOOP-13986
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> The metrics (renewalFailures and renewalFailuresTotal) in the  following code 
> snippet are not printable.
> {code:title=UserGroupInformation.java}
> metrics.renewalFailuresTotal.incr();
> final long tgtEndTime = tgt.getEndTime().getTime();
> LOG.warn("Exception encountered while running the renewal "
> + "command for {}. (TGT end time:{}, renewalFailures: {},"
> + "renewalFailuresTotal: {})", getUserName(), tgtEndTime,
> metrics.renewalFailures, metrics.renewalFailuresTotal, ie);
> {code}
> The output of the code is like the following:
> {quote}
> 2017-01-12 12:23:14,062 WARN  security.UserGroupInformation 
> (UserGroupInformation.java:run(1012)) - Exception encountered while running 
> the renewal command for f...@example.com. (TGT end time:148425260, 
> renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@323aa7f9,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@c8af058)
> ExitCodeException exitCode=1: kinit: krb5_cc_get_principal: No credentials 
> cache file found
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-13 Thread Greg Senia (JIRA)
Greg Senia created HADOOP-13988:
---

 Summary: KMSClientProvider does not work with WebHDFS and Apache 
Knox w/ProxyUser
 Key: HADOOP-13988
 URL: https://issues.apache.org/jira/browse/HADOOP-13988
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, kms
Affects Versions: 2.7.3
 Environment: HDP 2.5.3.0 

WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
Reporter: Greg Senia


After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider issues 
have not been resolved. We put a test build together and applied HADOOP-13558 
and HADOOP-13749 these two fixes did still not solve the issue with requests 
coming from WebHDFS through to Knox to a TDE zone.

So we added some debug to our build and determined effectively what is 
happening here is a double proxy situation which does not seem to work. So we 
propose the following fix in getActualUgi Method:

 }
 // Use current user by default
 UserGroupInformation actualUgi = currentUgi;
 if (currentUgi.getRealUser() != null) {
   // Use real user for proxy user
   if (LOG.isDebugEnabled()) {
   LOG.debug("using RealUser for proxyUser);
}
   actualUgi = currentUgi.getRealUser();
   if (getDoAsUser() != null) {
  if (LOG.isDebugEnabled()) {
LOG.debug("doAsUser exists");
LOG.debug("currentUGI realUser shortName: {}", 
currentUgi.getRealUser().getShortUserName());
LOG.debug("processUGI loginUser shortName: {}", 
UserGroupInformation.getLoginUser().getShortUserName());
  }
  if (currentUgi.getRealUser().getShortUserName() != 
UserGroupInformation.getLoginUser().getShortUserName()) {
  if (LOG.isDebugEnabled()) {
LOG.debug("currentUGI.realUser does not match 
UGI.processUser);
  }
  actualUgi = UserGroupInformation.getLoginUser();
  if (LOG.isDebugEnabled()) {
LOG.debug("LoginUser for Proxy: {}", 
actualUgi.getLoginUser());
  }
  }
   }

 } else if (!currentUgiContainsKmsDt() &&
 !currentUgi.hasKerberosCredentials()) {
   // Use login user for user that does not have either
   // Kerberos credential or KMS delegation token for KMS operations
   if (LOG.isDebugEnabled()) {
   LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
Credentials");
}
   actualUgi = currentUgi.getLoginUser();
 }
 return actualUgi;
   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822070#comment-15822070
 ] 

John Zhuge commented on HADOOP-13987:
-

Sorry Larry, our updates crossed paths.

> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: populated the credentials file, added 
> "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
> list" failed due to incorrect password. So I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key 
> list" worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822062#comment-15822062
 ] 

John Zhuge commented on HADOOP-13987:
-

Larry and I are discussing the pros and cons the following approaches to 
enhance {{SSLFactory#readSSLConfiguration}}:
1. Read credential provider path. Whenever Credential Provider needs another 
property or any other {{Configuration}} change might affect reading SSL 
properties, remember to update this code.
2. Still create empty sslConf and add both {{ssl-MODE.xml}} and 
{{core-site.xml}} as configuration resource
3. Create sslConf as a clone of {{SSLFactory#conf}} then add {{ssl-MODE.xml}} 
as configuration resource

Both 2 and 3 pull in lots of properties not needed for SSL. Any potential 
permission issues or name collisions?

> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: populated the credentials file, added 
> "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
> list" failed due to incorrect password. So I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key 
> list" worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822054#comment-15822054
 ] 

Larry McCay commented on HADOOP-13987:
--

[~jzhuge] - as I mentioned on the email thread, this is working as intended. In 
the very least we need documentation around this and possibly an improvement to 
also support core-site.xml with overrides from ssl-client.xml and 
ssl-server.xml by adding the appropriate ssl config to the central.

We need to make sure that this is tested well along with other scenarios that 
would otherwise be using the ssl only config and would now need to make sure 
that the central provider path works for everyone.


> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: populated the credentials file, added 
> "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
> list" failed due to incorrect password. So I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key 
> list" worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822042#comment-15822042
 ] 

John Zhuge commented on HADOOP-13987:
-

In the SSLFactory constructor, a new Configuration "sslConf" that only reads 
"ssl-client.xml" or "ssl-server.xml" is passed to FileBasedKeyStoresFactory 
which calls Configuration.getPassword() to initialize, but "sslConf" does not 
contain the property "hadoop.security.credential.provider.path" because it is 
usually added to "core-site.xml" or component site xml.

{code:title=SSLFactory(Mode mode, Configuration conf)}
Configuration sslConf = readSSLConfiguration(mode);
Class klass
  = conf.getClass(KEYSTORES_FACTORY_CLASS_KEY,
  FileBasedKeyStoresFactory.class, KeyStoresFactory.class);
keystoresFactory = ReflectionUtils.newInstance(klass, sslConf);
{code}

{code:title=Configuration readSSLConfiguration(Mode mode)}
Configuration sslConf = new Configuration(false);
sslConf.setBoolean(SSL_REQUIRE_CLIENT_CERT_KEY, requireClientCert);
String sslConfResource;
if (mode == Mode.CLIENT) {
  sslConfResource = conf.get(SSL_CLIENT_CONF_KEY,
  SSL_CLIENT_CONF_DEFAULT);
} else {
  sslConfResource = conf.get(SSL_SERVER_CONF_KEY,
  SSL_SERVER_CONF_DEFAULT);
}
sslConf.addResource(sslConfResource);
return sslConf;
{code}

Backtrace for "hadoop key list":
* getProviders:76, CredentialProviderFactory {org.apache.hadoop.security.alias}
* getPasswordFromCredentialProviders:2048, Configuration 
{org.apache.hadoop.conf}
* getPassword:2027, Configuration {org.apache.hadoop.conf}
* getPassword:240, FileBasedKeyStoresFactory {org.apache.hadoop.security.ssl}
* init:203, FileBasedKeyStoresFactory {org.apache.hadoop.security.ssl}
* init:187, SSLFactory {org.apache.hadoop.security.ssl}
* :442, KMSClientProvider {org.apache.hadoop.crypto.key.kms}
* createProvider:350, KMSClientProvider$Factory 
{org.apache.hadoop.crypto.key.kms}
* createProvider:341, KMSClientProvider$Factory 
{org.apache.hadoop.crypto.key.kms}
* get:96, KeyProviderFactory {org.apache.hadoop.crypto.key}
* getProviders:68, KeyProviderFactory {org.apache.hadoop.crypto.key}
* getKeyProvider:181, KeyShell$Command {org.apache.hadoop.crypto.key}
* validate:230, KeyShell$ListCommand {org.apache.hadoop.crypto.key}
* run:71, CommandShell {org.apache.hadoop.tools}
* run:76, ToolRunner {org.apache.hadoop.util}
* main:478, KeyShell {org.apache.hadoop.crypto.key}

SSLFactory is created by:
* LogLevel
* Fetcher
* KMSClientProvider (used by "hadoop key" command)
* URLConnectionFactory
* ShuffleHandler
* TimelineClientImpl
* DatanodeHttpServer

> Enhance SSLFactory support for Credential Provider
> --
>
> Key: HADOOP-13987
> URL: https://issues.apache.org/jira/browse/HADOOP-13987
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Testing CredentialProvider with KMS: populated the credentials file, added 
> "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
> list" failed due to incorrect password. So I added 
> "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key 
> list" worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13987) Enhance SSLFactory support for Credential Provider

2017-01-13 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13987:
---

 Summary: Enhance SSLFactory support for Credential Provider
 Key: HADOOP-13987
 URL: https://issues.apache.org/jira/browse/HADOOP-13987
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


Testing CredentialProvider with KMS: populated the credentials file, added 
"hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key 
list" failed due to incorrect password. So I added 
"hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key list" 
worked! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13986) UGI.UgiMetrics.renewalFailureTotal is not printable

2017-01-13 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-13986:


 Summary: UGI.UgiMetrics.renewalFailureTotal is not printable
 Key: HADOOP-13986
 URL: https://issues.apache.org/jira/browse/HADOOP-13986
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0, 3.0.0-alpha2
Reporter: Wei-Chiu Chuang
Priority: Minor


The metrics (renewalFailures and renewalFailuresTotal) in the  following code 
snippet are not printable.
{code:title=UserGroupInformation.java}
metrics.renewalFailuresTotal.incr();
final long tgtEndTime = tgt.getEndTime().getTime();
LOG.warn("Exception encountered while running the renewal "
+ "command for {}. (TGT end time:{}, renewalFailures: {},"
+ "renewalFailuresTotal: {})", getUserName(), tgtEndTime,
metrics.renewalFailures, metrics.renewalFailuresTotal, ie);
{code}
The output of the code is like the following:
{quote}
2017-01-12 12:23:14,062 WARN  security.UserGroupInformation 
(UserGroupInformation.java:run(1012)) - Exception encountered while running the 
renewal command for f...@example.com. (TGT end time:148425260, 
renewalFailures: 
org.apache.hadoop.metrics2.lib.MutableGaugeInt@323aa7f9,renewalFailuresTotal: 
org.apache.hadoop.metrics2.lib.MutableGaugeLong@c8af058)
ExitCodeException exitCode=1: kinit: krb5_cc_get_principal: No credentials 
cache file found
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin -getAllServiceState option to get the HA state of all the NameNodes/ResourceManagers

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821903#comment-15821903
 ] 

Hadoop QA commented on HADOOP-13933:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} root: The patch generated 0 new + 145 unchanged - 3 
fixed = 145 total (was 148) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
3s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13933 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821887#comment-15821887
 ] 

Hadoop QA commented on HADOOP-9565:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} HADOOP-9565 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-9565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12821320/HADOOP-9565-branch-2-007.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11432/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make the fact that some {{FileSystem}} implementations are really 
> blobstores, with different atomicity and consistency guarantees, by adding a 
> {{Blobstore}} interface to add to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement at server-side copy operation as a substitute for 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-01-13 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor reassigned HADOOP-9565:
-

Assignee: Thomas Demoor  (was: Pieter Reuse)

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make the fact that some {{FileSystem}} implementations are really 
> blobstores, with different atomicity and consistency guarantees, by adding a 
> {{Blobstore}} interface to add to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement at server-side copy operation as a substitute for 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821867#comment-15821867
 ] 

Hadoop QA commented on HADOOP-13650:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
32s{color} | {color:red} root in HADOOP-13345 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
26s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13650 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847330/HADOOP-13650-HADOOP-13345.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Updated] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13650:
---
Attachment: HADOOP-13650-HADOOP-13345.009.patch

Thanks for the detailed reviews, [~fabbri]

bq. Minor comment clarification: "@param create When using DynamoDB, create 
table if it does not exist"
Done

bq. What if create == true here?. 

Good catch. Fixed.

bq. Do we need to enforce that this FileSystem does not have a MetadataStore 
configured? 

Done.

bq. S3A's listFiles discovers non-empty directories

Thanks for catching this. The comments are outdated now. Since 
{{LocatedFileStatus}} erased the {{isEmptyDir}},  the code here is still valid 
I think.  I modified the comments.

bq. Should we add to dirCache here?

{{dirCache}} is used in {{putParentsIfNotPresent(child);}} after this 
statement. 

bq. Ahh. you enforce no MetadataStore here.. Should we move this up to 
initS3AFileSystem()?

Done.

Would you mind give another round of review? Much appriciate!


> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch, 
> HADOOP-13650-HADOOP-13345.001.patch, HADOOP-13650-HADOOP-13345.002.patch, 
> HADOOP-13650-HADOOP-13345.003.patch, HADOOP-13650-HADOOP-13345.004.patch, 
> HADOOP-13650-HADOOP-13345.005.patch, HADOOP-13650-HADOOP-13345.006.patch, 
> HADOOP-13650-HADOOP-13345.007.patch, HADOOP-13650-HADOOP-13345.008.patch, 
> HADOOP-13650-HADOOP-13345.009.patch
>
>
> Similar systems like EMRFS has the CLI tools to manipulate the metadata 
> store, i.e., create or delete metadata store, or {{import}}, {{sync}} the 
> file metadata between metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13959) S3guard: replace dynamo.describe() call in init with more efficient query

2017-01-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821743#comment-15821743
 ] 

Steve Loughran commented on HADOOP-13959:
-

The proposed version marker of HADOOP-13985 can be used here.

# if the request fails with no such entry: bad table.
# if the request fails for wrong  version: fail with message
# if the request fails for low-level reason: pass up

There's one risk here, transient failures during FS launch. This is something 
which has surfaced with S3A and bucket existence checks...I've concluded that 
it's hard to handle elegantly


> S3guard: replace dynamo.describe() call in init with more efficient query
> -
>
> Key: HADOOP-13959
> URL: https://issues.apache.org/jira/browse/HADOOP-13959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Priority: Minor
>
> HADOOP-13908 adds initialization when a table isn't created, using the 
> {{describe()}} call.
> AWS document this as inefficient, and throttle it. We should be able to get 
> away with a simple table lookup as the probe



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13985) s3guard: add a version marker to every table

2017-01-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13985:
---

 Summary: s3guard: add a version marker to every table
 Key: HADOOP-13985
 URL: https://issues.apache.org/jira/browse/HADOOP-13985
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Steve Loughran


This is something else we need before any preview: a way to identify a table 
version, so that if future versions change the table structure:

* older clients can recognise that it's a newer format, and fail
* the future version can identify that it's an older format, and fail until 
some fsck-upgrade operation has taken place

I think something like a row on a path which is impossible in a real 
filesystem, such as "../VERSION" would allow a version marker to go in; the 
length field could be abused for the version number.

This field would be something that'd be checked in init(), so be the simple 
test for table existence we need for faster init



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13933) Add haadmin -getAllServiceState option to get the HA state of all the NameNodes/ResourceManagers

2017-01-13 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-13933:

Attachment: HADOOP-13933.006.patch

Thanks [~ajisakaa] for review.
Attached updated patch, pls review..

> Add haadmin -getAllServiceState option to get the HA state of all the 
> NameNodes/ResourceManagers
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HADOOP-13933.003.patch, 
> HADOOP-13933.003.patch, HADOOP-13933.004.patch, HADOOP-13933.005.patch, 
> HADOOP-13933.006.patch, HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-01-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821694#comment-15821694
 ] 

Wei-Chiu Chuang commented on HADOOP-13956:
--

Good proposal, [~jzhuge]!
One thing to keep in mind is something similar to HADOOP-12846, HADOOP-13548 
may pop up.

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13984) Cannot force uppercase through auth_to_local

2017-01-13 Thread Pierre Sauvage (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Sauvage updated HADOOP-13984:

Description: 
It is possible, with /L, to force lowercase in auth_to_local mapping process 
(e.g. LOWER@HADOOP.DOMAIN -> lower).
But the opposite, using /U, is not possible
(e.g upper@HADOOP.DOMAIN -> UPPER)
The ability of forcing uppercase is important when using Active Directory with 
cross-realm trust since Active Directory is case insensitive (you can both 
kinit foo...@ad.domain.com and foo...@ad.domain.com).

  was:
It is possible, with /L, to force lowercase in auth_to_local mapping process 
(e.g. LOWER@HADOOP.DOMAIN -> lower).
But the opposite, using /U, is not possible
(e.g upper@HADOOP.DOMAIN -> UPPER)



> Cannot force uppercase through auth_to_local
> 
>
> Key: HADOOP-13984
> URL: https://issues.apache.org/jira/browse/HADOOP-13984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Pierre Sauvage
>
> It is possible, with /L, to force lowercase in auth_to_local mapping process 
> (e.g. LOWER@HADOOP.DOMAIN -> lower).
> But the opposite, using /U, is not possible
> (e.g upper@HADOOP.DOMAIN -> UPPER)
> The ability of forcing uppercase is important when using Active Directory 
> with cross-realm trust since Active Directory is case insensitive (you can 
> both kinit foo...@ad.domain.com and foo...@ad.domain.com).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13984) Cannot force uppercase through auth_to_local

2017-01-13 Thread Pierre Sauvage (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Sauvage updated HADOOP-13984:

Description: 
It is possible, with /L, to force lowercase in auth_to_local mapping process 
(e.g. LOWER@HADOOP.DOMAIN -> lower).
But the opposite, using /U, is not possible
(e.g upper@HADOOP.DOMAIN -> UPPER)


  was:
It is possible, with /L, to force lowercase in auth_to_local mapping process 
(e.g. LOWER@HADOOP.DOMAIN -> lower).
But the opposite is not possible
(e.g upper@HADOOP.DOMAIN -> UPPER)



> Cannot force uppercase through auth_to_local
> 
>
> Key: HADOOP-13984
> URL: https://issues.apache.org/jira/browse/HADOOP-13984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Pierre Sauvage
>
> It is possible, with /L, to force lowercase in auth_to_local mapping process 
> (e.g. LOWER@HADOOP.DOMAIN -> lower).
> But the opposite, using /U, is not possible
> (e.g upper@HADOOP.DOMAIN -> UPPER)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13984) Cannot force uppercase through auth_to_local

2017-01-13 Thread Pierre Sauvage (JIRA)
Pierre Sauvage created HADOOP-13984:
---

 Summary: Cannot force uppercase through auth_to_local
 Key: HADOOP-13984
 URL: https://issues.apache.org/jira/browse/HADOOP-13984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Pierre Sauvage


It is possible, with /L, to force lowercase in auth_to_local mapping process 
(e.g. LOWER@HADOOP.DOMAIN -> lower).
But the opposite is not possible
(e.g upper@HADOOP.DOMAIN -> UPPER)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13933) Add haadmin -getAllServiceState option to get the HA state of all the NameNodes/ResourceManagers

2017-01-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821513#comment-15821513
 ] 

Akira Ajisaka commented on HADOOP-13933:


Mostly looks good to me. I built an HA cluster and verified the new option.
Would you update HDFSHighAvailabilityWithNFS.md and 
HDFSHighAvailabilityWithQJM.md as well? I'm +1 if that is addressed.

> Add haadmin -getAllServiceState option to get the HA state of all the 
> NameNodes/ResourceManagers
> 
>
> Key: HADOOP-13933
> URL: https://issues.apache.org/jira/browse/HADOOP-13933
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-13933.002.patch, HADOOP-13933.003.patch, 
> HADOOP-13933.003.patch, HADOOP-13933.004.patch, HADOOP-13933.005.patch, 
> HDFS-9559.01.patch
>
>
> Currently we have one command to get state of namenode.
> {code}
> ./hdfs haadmin -getServiceState 
> {code}
> It will be good to have command which will give state of all the namenodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13975) Allow DistCp to use MultiThreadedMapper

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821453#comment-15821453
 ] 

Hadoop QA commented on HADOOP-13975:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 103 unchanged - 1 fixed = 104 total (was 104) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 43s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestOptionsParser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847167/HADOOP-distcp-multithreaded-mapper-trunk.4.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aafc293a1810 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f344e0 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11427/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11427/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11427/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11427/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow DistCp to use MultiThreadedMapper

[jira] [Commented] (HADOOP-13589) S3Guard: Allow execution of all S3A integration tests with S3Guard enabled.

2017-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821441#comment-15821441
 ] 

Hadoop QA commented on HADOOP-13589:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13589 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847192/HADOOP-13589-HADOOP-13345-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux c6978c85c790 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 2220b78 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11429/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11429/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11429/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: Allow execution of all S3A integration tests with S3Guard enabled.
> ---
>
> Key: HADOOP-13589
> URL: