[jira] [Commented] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651158#comment-16651158
 ] 

Hadoop QA commented on HADOOP-15856:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15856 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944051/HADOOP-15856-01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 003a2b230e6d 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bf8a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15372/testReport/ |
| Max. process+thread count | 337 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15372/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HADOOP-15856-01.patch
>
>
> After removal of {{javah}} 

[jira] [Updated] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-15 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-15856:
---
Attachment: HADOOP-15856-01.patch

> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HADOOP-15856-01.patch
>
>
> After removal of the {{javah}} dependency in HADOOP-15767, the trunk build fails on Windows because it is unable to find the JNI headers.
> HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-15 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-15856:
---
Status: Patch Available  (was: Open)

> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HADOOP-15856-01.patch
>
>
> After removal of the {{javah}} dependency in HADOOP-15767, the trunk build fails on Windows because it is unable to find the JNI headers.
> HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-15 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HADOOP-15856:
--

Assignee: Vinayakumar B

> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
>
> After removal of the {{javah}} dependency in HADOOP-15767, the trunk build fails on Windows because it is unable to find the JNI headers.
> HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-15 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-15856:
--

 Summary: Trunk build fails to compile native on Windows
 Key: HADOOP-15856
 URL: https://issues.apache.org/jira/browse/HADOOP-15856
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Vinayakumar B


After removal of the {{javah}} dependency in HADOOP-15767, the trunk build fails on Windows because it is unable to find the JNI headers.

HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.
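For context, {{javah}} was replaced by {{javac}}'s own header generation ({{-h}}). A minimal sketch of that mechanism through the compiler API (illustrative only: the source file and output directory here are assumptions, not the contents of the attached patch):

{code:java}
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class GenerateJniHeaders {
  public static void main(String[] args) {
    JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
    // "-h <dir>" asks javac to emit the JNI headers that javah used to produce;
    // the native (CMake/MSBuild) build must then add that directory to its include path.
    int rc = javac.run(null, null, null,
        "-h", "target/native/javah",
        "-d", "target/classes",
        "src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java");
    System.exit(rc);
  }
}
{code}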



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651104#comment-16651104
 ] 

Vishwajeet Dusane commented on HADOOP-15851:


Thank you [~ste...@apache.org]

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a predefined schema.
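For reference, a minimal sketch of one way to suppress the banner (the class name is hypothetical and the committed change to {{SSLSocketFactoryEx}} may differ): raise the {{java.util.logging}} level of the wildfly-openssl logger before the library is initialised.

{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

public final class WildflyLogSilencer {
  private WildflyLogSilencer() {
  }

  /** Keep the wildfly-openssl INFO banner off the console by raising the logger level. */
  public static void silence() {
    Logger.getLogger("org.wildfly.openssl").setLevel(Level.WARNING);
  }
}
{code}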



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Use DelegationTokenIssuer to create KMS delegation tokens that can authenticate to all KMS instances

2018-10-15 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651073#comment-16651073
 ] 

Xiao Chen commented on HADOOP-14445:


Forgot to mention: addendum patch committed to relevant branches (trunk, 
branch-3.[0-2]). Thanks again Daryn.

Branch-2 is still good to have, but I fear I'll be preempted at least for this 
week.

> Use DelegationTokenIssuer to create KMS delegation tokens that can 
> authenticate to all KMS instances
> 
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.15.patch, HADOOP-14445.16.patch, HADOOP-14445.17.patch, 
> HADOOP-14445.18.patch, HADOOP-14445.19.patch, HADOOP-14445.20.patch, 
> HADOOP-14445.addemdum.patch, HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.branch-3.0.001.patch, HADOOP-14445.compat.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650982#comment-16650982
 ] 

Hadoop QA commented on HADOOP-14556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 38 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 11s{color} 
| {color:red} root generated 1 new + 1326 unchanged - 1 fixed = 1327 total (was 
1327) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 27s{color} | {color:orange} root: The patch generated 20 new + 168 unchanged 
- 6 fixed = 188 total (was 174) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 128 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  3s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
58s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
26s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | 

[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-15 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650972#comment-16650972
 ] 

Larry McCay commented on HADOOP-15855:
--

{code}

+To wrap a filesystem URIs with a `jceks` URI follow the following steps:
+
+1. Take a filesystem URI such as `hdfs://namenode:9001/users/alice/secrets.jceks`
+1. Place `jceks://` in front of the URL: `jceks://hdfs://namenode:9001/users/alice/secrets.jceks`
+1. Replace the second `://` string with an `@` symbol: `jceks://hdfs@namenode:9001/users/alice/secrets.jceks`

{code}

s/a filesystem URIs/filesystem URIs/
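(Illustrative only, with hypothetical host and path: the wrapped URI is then what a client sets as the credential provider path.)

{code:java}
import org.apache.hadoop.conf.Configuration;

public class JceksProviderPathExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // hdfs://namenode:9001/users/alice/secrets.jceks wrapped as a jceks:// URI,
    // with the second "://" replaced by "@"
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@namenode:9001/users/alice/secrets.jceks");
  }
}
{code}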

{code}

It is also limited to PKI keypairs.

{code}

The above needs to be reverified with modern JDK versions of keytool.

{code}

Editors will not review the secrets stored within the keystore, nor will `cat`, 
`more` or any other standard tools. This is why the keystore providers are 
better than "side file" storage of credentials.

{code}

s/will not review/will not reveal/

Otherwise, looks good to me!

 

> Review hadoop credential doc, including object store details
> 
>
> Key: HADOOP-15855
> URL: https://issues.apache.org/jira/browse/HADOOP-15855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15855-001.patch
>
>
> I've got some changes to make to the hadoop credentials API doc; some minor 
> editing and examples of credential paths in object stores with some extra 
> details (i.e. how you can't refer to a store from the same store URI).
> These examples need to come with unit tests to verify that the examples are 
> correct, obviously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650965#comment-16650965
 ] 

Hadoop QA commented on HADOOP-15855:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 13 unchanged - 3 fixed = 13 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
1m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15855 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944013/HADOOP-15855-001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  compile  javac  javadoc  
mvninstall  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8f5340cedd64 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7fe1a40 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15371/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15371/testReport/ |
| Max. process+thread count | 1391 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-15854) AuthToken Use StringBuilder instead of StringBuffer

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650962#comment-16650962
 ] 

Hadoop QA commented on HADOOP-15854:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15854 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944011/HADOOP-15854.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 69654aabd68e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef9dc6c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15370/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15370/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> AuthToken Use StringBuilder instead 

[jira] [Commented] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650916#comment-16650916
 ] 

Hudson commented on HADOOP-15853:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15221/])
HADOOP-15853. TestConfigurationDeprecation leaves behind a temp file, (rkanter: 
rev 7fe1a40a6ba692ce5907b96db3a7cb3639c091bd)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
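For context, a minimal sketch of the cleanup (hedged: the committed change may differ in detail) is to delete the fourth config file in {{tearDown}} as well:

{code:java}
  @After
  public void tearDown() throws Exception {
    new File(CONFIG).delete();
    new File(CONFIG2).delete();
    new File(CONFIG3).delete();
    // also remove the file introduced by HADOOP-15708, otherwise it is left
    // behind and trips the ASF license check
    new File(CONFIG4).delete();
  }
{code}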


> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650881#comment-16650881
 ] 

Hudson commented on HADOOP-15851:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15220 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15220/])
HADOOP-15851. Disable wildfly logs to the console. Contributed by (stevel: rev 
ef9dc6c44c686e836bb25e31ff355cff80572d23)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/SSLSocketFactoryEx.java


> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #427: YARN-6636. Add basic fairscheduler nodelabel suppo...

2018-10-15 Thread bschell
GitHub user bschell opened a pull request:

https://github.com/apache/hadoop/pull/427

YARN-6636. Add basic fairscheduler nodelabel support.

Supports unfair label scheduling and non-exclusive node labels.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bschell/hadoop newlabels

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/427.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #427


commit 6f1d5896f8e8819910446cd8b4a7cc2ae2a84601
Author: Brandon Scheller 
Date:   2018-09-27T22:37:36Z

Add fairscheduler nodelabel support

Supports unfair label scheduling and non-exclusive node labels




---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15853:
---
Fix Version/s: 3.3.0

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15853:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn].  Committed to trunk!

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650869#comment-16650869
 ] 

Steve Loughran commented on HADOOP-15855:
-

Patch 001.
* backquote more text into `code-format`
* review the general prose and tune it a bit
* remove a forward reference to code examples which aren't in the doc
* add examples of references to object stores, including wasb and abfs references whose URIs already contain @
* add unit tests for the new examples to check their validity

The updated doc [is visible on github|https://github.com/steveloughran/hadoop/blob/filesystem/HADOOP-15855-cred-docs/hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md]

+[~lmccay]

> Review hadoop credential doc, including object store details
> 
>
> Key: HADOOP-15855
> URL: https://issues.apache.org/jira/browse/HADOOP-15855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15855-001.patch
>
>
> I've got some changes to make to the hadoop credentials API doc; some minor 
> editing and examples of credential paths in object stores with some extra 
> details (i.e. how you can't refer to a store from the same store URI).
> These examples need to come with unit tests to verify that the examples are 
> correct, obviously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15855:

Attachment: HADOOP-15855-001.patch

> Review hadoop credential doc, including object store details
> 
>
> Key: HADOOP-15855
> URL: https://issues.apache.org/jira/browse/HADOOP-15855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15855-001.patch
>
>
> I've got some changes to make to the hadoop credentials API doc; some minor 
> editing and examples of credential paths in object stores with some extra 
> details (i.e. how you can't refer to a store from the same store URI).
> These examples need to come with unit tests to verify that the examples are 
> correct, obviously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15855:

Status: Patch Available  (was: Open)

> Review hadoop credential doc, including object store details
> 
>
> Key: HADOOP-15855
> URL: https://issues.apache.org/jira/browse/HADOOP-15855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15855-001.patch
>
>
> I've got some changes to make to the hadoop credentials API doc; some minor 
> editing and examples of credential paths in object stores with some extra 
> details (i.e. how you can't refer to a store from the same store URI).
> These examples need to come with unit tests to verify that the examples are 
> correct, obviously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15855:
---

 Summary: Review hadoop credential doc, including object store 
details
 Key: HADOOP-15855
 URL: https://issues.apache.org/jira/browse/HADOOP-15855
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, security
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


I've got some changes to make to the hadoop credentials API doc; some minor editing and examples of credential paths in object stores with some extra details (i.e. how you can't refer to a store from the same store URI).

These examples need to come with unit tests to verify that the examples are correct, obviously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650857#comment-16650857
 ] 

Robert Kanter commented on HADOOP-15853:


+1 LGTM

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15854) AuthToken Use StringBuilder instead of StringBuffer

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HADOOP-15854:


Assignee: BELUGA BEHR

> AuthToken Use StringBuilder instead of StringBuffer
> ---
>
> Key: HADOOP-15854
> URL: https://issues.apache.org/jira/browse/HADOOP-15854
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-15854.1.patch
>
>
> Use {{StringBuilder}} instead of {{StringBuffer}} because {{StringBuilder}} 
> is not synchronized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15851:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1, committed -thanks!

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15854) AuthToken Use StringBuilder instead of StringBuffer

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15854:
-
Attachment: HADOOP-15854.1.patch

> AuthToken Use StringBuilder instead of StringBuffer
> ---
>
> Key: HADOOP-15854
> URL: https://issues.apache.org/jira/browse/HADOOP-15854
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-15854.1.patch
>
>
> Use {{StringBuilder}} instead of {{StringBuffer}} because {{StringBuilder}} 
> is not synchronized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15854) AuthToken Use StringBuilder instead of StringBuffer

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15854:
-
Status: Patch Available  (was: Open)

> AuthToken Use StringBuilder instead of StringBuffer
> ---
>
> Key: HADOOP-15854
> URL: https://issues.apache.org/jira/browse/HADOOP-15854
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-15854.1.patch
>
>
> Use {{StringBuilder}} instead of {{StringBuffer}} because {{StringBuilder}} 
> is not synchronized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15854) AuthToken Use StringBuilder instead of StringBuffer

2018-10-15 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-15854:


 Summary: AuthToken Use StringBuilder instead of StringBuffer
 Key: HADOOP-15854
 URL: https://issues.apache.org/jira/browse/HADOOP-15854
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auth
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HADOOP-15854.1.patch

Use {{StringBuilder}} instead of {{StringBuffer}} because {{StringBuilder}} is 
not synchronized.
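For illustration (a sketch with hypothetical field names, not the actual {{AuthToken}} diff), the change is a drop-in swap; {{StringBuilder}} offers the same API without per-call synchronization:

{code:java}
  // before: StringBuffer sb = new StringBuffer();
  static String buildToken(String user, String type, long expires) {
    StringBuilder sb = new StringBuilder();
    sb.append("u=").append(user)
      .append("&t=").append(type)
      .append("&e=").append(expires);
    return sb.toString();
  }
{code}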



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650850#comment-16650850
 ] 

Steve Loughran edited comment on HADOOP-14556 at 10/15/18 9:51 PM:
---

Note that the (still partial) {{ITestDelegatedMRJob}} test does show that S3A tokens are picked up for MR job submission; tested for full, session and role tokens.

One fun detail: if your fs.s3a.secret.key attributes are set in the job conf you launch with, they end up at the far end even though you are using DTs. Why? Well, because they are config options, aren't they?

To get the lockdown to work, you need to be serving up the secrets inside a hadoop credential provider file such as a localjceks file. That way, the job conf will not contain the secrets.

There's no obvious way to patch the options out, so that's going to have to go down as the recommended approach. Setting the AWS env vars would also work, though since Spark automatically picks up those values and patches the fs config (without checking the properties first), they may get in.
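A sketch of that setup (paths hypothetical): point {{hadoop.security.credential.provider.path}} at a local keystore so the S3A secrets are resolved from the provider chain instead of sitting in the serialized job conf.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3ASecretsFromLocalJceks {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // secrets live in the local keystore, not in the job conf that gets shipped
    conf.set("hadoop.security.credential.provider.path",
        "localjceks://file/home/alice/aws-keys.jceks");
    // fs.s3a.access.key / fs.s3a.secret.key are then looked up through the
    // credential provider chain when the S3A filesystem is created
  }
}
{code}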


was (Author: ste...@apache.org):
Note that the partially {{ITestDelegatedMRJob}} test does show that S3A tokens 
are picked up for MR job submit; tested for full, session and role tokens.

One fun detail: if your fs.s3a.secret.key  attributes are set in the job conf 
you launch with, they end up at the far end, even though you are using DTs. 
Why? well, because they are config options, aren't they?

To get the lockdown to work, you need to be serving up the secrets inside a 
hadoop credential provider file such as  localjceks file. That way, the job 
conf will not contain the secrets.
There's no 

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650850#comment-16650850
 ] 

Steve Loughran commented on HADOOP-14556:
-

Note that the (still partial) {{ITestDelegatedMRJob}} test does show that S3A tokens are picked up for MR job submission; tested for full, session and role tokens.

One fun detail: if your fs.s3a.secret.key attributes are set in the job conf you launch with, they end up at the far end even though you are using DTs. Why? Well, because they are config options, aren't they?

To get the lockdown to work, you need to be serving up the secrets inside a hadoop credential provider file such as a localjceks file. That way, the job conf will not contain the secrets.
There's no 

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650831#comment-16650831
 ] 

Steve Loughran edited comment on HADOOP-14556 at 10/15/18 9:45 PM:
---

HADOOP-14556 patch 013
* ITestDelegatedMRJob mixes a mock job submission API with a real miniYarn 
cluster to verify that MR job submission collects DTs for source and 
destination paths.
  To do this the MockJob class had to go into hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/MockJob.java, and job.connect() was made an override point (so it can be skipped).
* default assumed role duration returned to 1h; it had been extended to 6h but 
that only works if your role has been explicitly extended to > 1h duration.
* docs on increasing it (plus the error messages you get if you don't) improved/extended in assumed_roles.md as well as delegation_tokens.md.
 All AWS error messages related to STS/session and role requests are now in 
assumed_roles.md to avoid duplication & inconsistencies.
* ITestS3ADelegationTokenSupport tests that the Session DT binding will forward 
any session creds it gets from its own auth chain, rather than ask for new ones 
(which it can't do with session creds)
* Also: I'm using a Hadoop cred provider for storing secrets; this broke the 
AssumeRole and delegation tests which were clearing or overwriting the 
fs.s3a.{auth, secret, session} options, as the values in the creds file were still 
being picked up. Fix: explicitly reset hadoop.security.credential.provider.path 
for all the tests which were now failing (see the sketch just after this list).
* minor checkstyle fixup
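A minimal sketch of that kind of reset (the helper name is illustrative, and the option 
names are the standard S3A ones rather than necessarily the exact set touched by the patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustrative helper: strip credential-provider and static S3A auth options from a test conf. */
public final class TestConfCleaner {

  private TestConfCleaner() {
  }

  public static Configuration removeExternalSecrets(Configuration conf) {
    // drop the provider path so a developer's local JCEKS store is not consulted
    conf.unset("hadoop.security.credential.provider.path");
    // drop any statically configured S3A auth/secret/session options
    conf.unset("fs.s3a.access.key");
    conf.unset("fs.s3a.secret.key");
    conf.unset("fs.s3a.session.token");
    return conf;
  }
}
{code}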

Tested: S3A Ireland. Apart from the cred problem (fixed), I got a failure of 
{{ITestS3GuardToolLocal\#testDestroyNoBucket}} *even when I was running with 
dynamodb*. I think that test suite is running when it shouldn't. More research 
needed there.


was (Author: ste...@apache.org):
HADOOP-14556 patch 013
* ITestDelegatedMRJob mixes a mock job submission API with a real miniYarn 
cluster to verify that MR job submission collects DTs for source and 
destination paths.
  To do this the MockJob class had to go into 
hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/MockJob.java and 
job.connect() made an override point (so it can be skipped)
* default assumed role duration returned to 1h; it had been extended to 6h but 
that only works if your role has been explicitly extended to > 1h duration.
* and docs on increasing it (plus error messages you get if you don't) 
improved/extended in assumed_roles.md as well as delegation_tokens.md.
 All AWS error messages related to STS/session and role requests are now in 
assumed_roles.md to avoid duplication & inconsistencies.
* ITestS3ADelegationTokenSupport tests that the Session DT binding will forward 
any session creds it gets from its own auth chain, rather than ask for new ones 
(which it can't do with session creds)
* Also: I'm using a Hadoop cred provider for storing secrets; this broke the 
AssumeRole and delegation tests which were clearing or overwriting the 
fs.s3a.{auth, secret, session} options, as those in the creds file were still 
being picked up. Fix: explicitly reset hadoop.security.credential.provider.path 
for all the tests which were now failing.
* minor checkstyle fixup

tested, S3A ireland. Apart from the cred problem (fixed), I got a failure of 
{{ITestS3GuardToolLocal\#testDestroyNoBucket }} *even when I was running with 
dynamodb*. I think that test suite is running when it shouldn't. More research 
needed there

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to 

[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Patch Available  (was: Open)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650831#comment-16650831
 ] 

Steve Loughran commented on HADOOP-14556:
-

HADOOP-14556 patch 013
* ITestDelegatedMRJob mixes a mock job submission API with a real miniYarn 
cluster to verify that MR job submission collects DTs for source and 
destination paths.
  To do this the MockJob class had to go into 
hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/MockJob.java and 
job.connect() made an override point (so it can be skipped)
* default assumed role duration returned to 1h; it had been extended to 6h but 
that only works if your role has been explicitly extended to > 1h duration.
* and docs on increasing it (plus error messages you get if you don't) 
improved/extended in assumed_roles.md as well as delegation_tokens.md.
 All AWS error messages related to STS/session and role requests are now in 
assumed_roles.md to avoid duplication & inconsistencies.
* ITestS3ADelegationTokenSupport tests that the Session DT binding will forward 
any session creds it gets from its own auth chain, rather than ask for new ones 
(which it can't do with session creds)
* Also: I'm using a Hadoop cred provider for storing secrets; this broke the 
AssumeRole and delegation tests which were clearing or overwriting the 
fs.s3a.{auth, secret, session} options, as those in the creds file were still 
being picked up. Fix: explicitly reset hadoop.security.credential.provider.path 
for all the tests which were now failing.
* minor checkstyle fixup

tested, S3A ireland. Apart from the cred problem (fixed), I got a failure of 
{{ITestS3GuardToolLocal\#testDestroyNoBucket }} *even when I was running with 
dynamodb*. I think that test suite is running when it shouldn't. More research 
needed there

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650829#comment-16650829
 ] 

Hadoop QA commented on HADOOP-15853:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943996/HADOOP-15853-01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 55a7f3f7d298 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e13a38f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15368/testReport/ |
| Max. process+thread count | 1412 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15368/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 

[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Open  (was: Patch Available)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Attachment: HADOOP-14556-013.patch

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15852) QuotaUsage Review

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650818#comment-16650818
 ] 

Hadoop QA commented on HADOOP-15852:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 9 new + 11 unchanged - 1 fixed = 20 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
47s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943993/HADOOP-15852.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 988d3b4fe729 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e13a38f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15367/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15367/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15367/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 

[jira] [Updated] (HADOOP-15852) QuotaUsage Review

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15852:
-
Description: 
My new mission is to remove instances of {{StringBuffer}} in favor of 
{{StringBuilder}}.

* Simplify Code
* Use Eclipse to generate hashcode/equals
* Use StringBuilder instead of StringBuffer

  was:
My new mission is to remove {{StringBuffer}}s in favor of {{StringBuilder}}.

* Simplify Code
* Use Eclipse to generate hashcode/equals
* Use StringBuilder instead of StringBuffer


> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned HADOOP-15853:
--

Assignee: Ayush Saxena

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650703#comment-16650703
 ] 

Ayush Saxena commented on HADOOP-15853:
---

Thanx [~rkanter] for putting this up.
Have uploaded the patch with the fix. :)

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-15853:
--
Status: Patch Available  (was: Open)

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-15853:
--
Attachment: HADOOP-15853-01.patch

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15853-01.patch
>
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15853:
---
Labels: newbie  (was: )

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Priority: Major
>  Labels: newbie
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15708) Reading values from Configuration before adding deprecations make it impossible to read value with deprecated key

2018-10-15 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650688#comment-16650688
 ] 

Robert Kanter commented on HADOOP-15708:


Looks like we missed a trivial problem where one of the updated tests leaves 
behind a file, which causes test runs to raise an ASF license warning.  I've 
filed HADOOP-15853 to fix that.

> Reading values from Configuration before adding deprecations make it 
> impossible to read value with deprecated key
> -
>
> Key: HADOOP-15708
> URL: https://issues.apache.org/jira/browse/HADOOP-15708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15708-testcase.patch, HADOOP-15708.001.patch, 
> HADOOP-15708.002.patch, HADOOP-15708.003.patch, HADOOP-15708.004.patch
>
>
> Hadoop Common contains a widely used Configuration class.
>  This class can handle deprecations of properties, e.g. if property 'A' gets 
> deprecated with an alternative property key 'B', users can access property 
> values with keys 'A' and 'B'.
>  Unfortunately, this does not work in one case.
>  When a config file is specified (for instance, XML) and a property is read 
> with the config.get() method, the config is loaded from the file at this 
> time. 
>  If the deprecation mapping is not yet specified by the time any config value 
> is retrieved and the XML config refers to a deprecated key, then even after 
> the deprecation mapping is specified, the config value cannot be retrieved 
> with either the deprecated or the new key.
>  The attached patch contains a testcase that reproduces this wrong behavior.
> Here are the steps outlined what the testcase does:
>  1. Creates an XML config file with a deprecated property
>  2. Adds the config to the Configuration object
>  3. Retrieves the config with its deprecated key (it does not really matter 
> which property the user gets, could be any)
>  4. Specifies the deprecation rules including the one defined in the config
>  5. Prints and asserts the property retrieved from the config with both the 
> deprecated and the new property keys.
> For reference, here is the log of one execution that actually shows what the 
> issue is:
> {noformat}
> Loaded items: 1
> Looked up property value with name hadoop.zk.address: null
> Looked up property value with name yarn.resourcemanager.zk-address: 
> dummyZkAddress
> Contents of config file: [, , 
> yarn.resourcemanager.zk-addressdummyZkAddress,
>  ]
> Looked up property value with name hadoop.zk.address: null
> 2018-08-31 10:10:06,484 INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1397)) - yarn.resourcemanager.zk-address 
> is deprecated. Instead, use hadoop.zk.address
> Looked up property value with name hadoop.zk.address: null
> Looked up property value with name hadoop.zk.address: null
> java.lang.AssertionError: 
> Expected :dummyZkAddress
> Actual   :null
> {noformat}
> *As it's visible from the output and the code, the issue is really that if 
> the config is retrieved either with the deprecated or the new value, 
> Configuration both wants to serve the value with the new key.*
>  *If the mapping is not specified before any retrieval happened, the value is 
> only stored under the deprecated key but not the new key.*
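For reference, a minimal sketch of the ordering that triggers the behaviour described 
above (property names are taken from the log; the resource file name is just an example):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class DeprecationOrderingRepro {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // example resource: an XML file that sets the deprecated key
    // yarn.resourcemanager.zk-address to dummyZkAddress
    conf.addResource(new Path("zk-config.xml"));

    // any lookup forces the resource to load *before* the deprecation mapping exists
    System.out.println(conf.get("yarn.resourcemanager.zk-address")); // dummyZkAddress

    // the mapping is only registered afterwards
    Configuration.addDeprecation("yarn.resourcemanager.zk-address", "hadoop.zk.address");

    // per the report above, the value is now unreachable under either key
    System.out.println(conf.get("hadoop.zk.address"));               // null
    System.out.println(conf.get("yarn.resourcemanager.zk-address")); // null
  }
}
{code}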



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15853:
---
Issue Type: Bug  (was: Improvement)

> TestConfigurationDeprecation leaves behind a temp file, resulting in a 
> license issue
> 
>
> Key: HADOOP-15853
> URL: https://issues.apache.org/jira/browse/HADOOP-15853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Priority: Major
>
> HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
> them was adding
> {code:java}
>   final static String CONFIG4 = new File("./test-config4" +
>   "-TestConfigurationDeprecation.xml").getAbsolutePath();
> {code}
> which we never clean up in the {{tearDown}} method:
> {code:java}
>   @After
>   public void tearDown() throws Exception {
> new File(CONFIG).delete();
> new File(CONFIG2).delete();
> new File(CONFIG3).delete();
>   }
> {code}
> This results in that file being left behind, and causing a license warning in 
> test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15853) TestConfigurationDeprecation leaves behind a temp file, resulting in a license issue

2018-10-15 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-15853:
--

 Summary: TestConfigurationDeprecation leaves behind a temp file, 
resulting in a license issue
 Key: HADOOP-15853
 URL: https://issues.apache.org/jira/browse/HADOOP-15853
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 3.3.0
Reporter: Robert Kanter


HADOOP-15708 made some changes to {{TestConfigurationDeprecation}}.  One of 
them was adding
{code:java}
  final static String CONFIG4 = new File("./test-config4" +
  "-TestConfigurationDeprecation.xml").getAbsolutePath();
{code}
which we never clean up in the {{tearDown}} method:
{code:java}
  @After
  public void tearDown() throws Exception {
new File(CONFIG).delete();
new File(CONFIG2).delete();
new File(CONFIG3).delete();
  }
{code}

This results in that file being left behind, and causing a license warning in 
test runs.
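A minimal sketch of the obvious fix, extending {{tearDown}} to also delete the fourth 
file (the attached patch may of course differ in detail):

{code:java}
  @After
  public void tearDown() throws Exception {
    new File(CONFIG).delete();
    new File(CONFIG2).delete();
    new File(CONFIG3).delete();
    // also remove the file added by HADOOP-15708 so no stray XML is left
    // behind to trip the ASF license check
    new File(CONFIG4).delete();
  }
{code}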



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15852) QuotaUsage Review

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HADOOP-15852:


Assignee: BELUGA BEHR

> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch
>
>
> My new mission is to remove {{StringBuffer}}s in favor of {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15852) QuotaUsage Review

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15852:
-
Status: Patch Available  (was: Open)

> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch
>
>
> My new mission is to remove {{StringBuffer}}s in favor of {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15852) QuotaUsage Review

2018-10-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15852:
-
Attachment: HADOOP-15852.1.patch

> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch
>
>
> My new mission is to remove {{StringBuffer}}s in favor of {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15852) QuotaUsage Review

2018-10-15 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-15852:


 Summary: QuotaUsage Review
 Key: HADOOP-15852
 URL: https://issues.apache.org/jira/browse/HADOOP-15852
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HADOOP-15852.1.patch

My new mission is to remove {{StringBuffer}}s in favor of {{StringBuilder}}.

* Simplify Code
* Use Eclipse to generate hashcode/equals
* Use StringBuilder instead of StringBuffer
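A generic before/after illustration of the swap (not code taken from the attached patch):

{code:java}
public final class QuotaStringDemo {
  public static void main(String[] args) {
    // before: StringBuffer synchronizes every append, which is wasted work
    // for a string that is built and used on a single thread
    StringBuffer before = new StringBuffer();
    before.append("quota=").append(42);

    // after: StringBuilder is the unsynchronized drop-in replacement
    StringBuilder after = new StringBuilder();
    after.append("quota=").append(42);

    System.out.println(before + " vs " + after);
  }
}
{code}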



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Use DelegationTokenIssuer to create KMS delegation tokens that can authenticate to all KMS instances

2018-10-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650564#comment-16650564
 ] 

Hudson commented on HADOOP-14445:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15218 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15218/])
HADOOP-14445. Addendum: Use DelegationTokenIssuer to create KMS (xiao: rev 
b6fc72a0250ac3f2341ebe8a14d19b073e6224c8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderTokenIssuer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/org/apache/hadoop/security/token/DelegationTokenIssuer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


> Use DelegationTokenIssuer to create KMS delegation tokens that can 
> authenticate to all KMS instances
> 
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.15.patch, HADOOP-14445.16.patch, HADOOP-14445.17.patch, 
> HADOOP-14445.18.patch, HADOOP-14445.19.patch, HADOOP-14445.20.patch, 
> HADOOP-14445.addemdum.patch, HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.branch-3.0.001.patch, HADOOP-14445.compat.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650485#comment-16650485
 ] 

Íñigo Goiri commented on HADOOP-15821:
--

Any further comments on  [^HADOOP-15821.009.patch]?
The remaining issue would be to generalize YarnRegistryAttributes, which we 
would do in a separate JIRA, right?
I'm not 100% sure the web site will be properly generated in terms of the table 
of contents though.

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch, HADOOP-15821.009.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15828) Review of MachineList class

2018-10-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650455#comment-16650455
 ] 

Íñigo Goiri commented on HADOOP-15828:
--

The refactor of the constructor is a little tough to grasp; help me out here.
So basically, instead of using null lists we use empty ones; that's good, as it 
simplifies other parts of the code.
My question is about the old else case where we used to set {{all}} to false 
and the rest to null; how is that covered now?

BTW, what are we now covering by adding host2 that we weren't before?
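For readers following along, a generic illustration of the null-vs-empty-collection 
pattern being discussed here (this is not the actual {{MachineList}} code):

{code:java}
import java.util.Collection;
import java.util.Collections;

public final class HostSets {

  private final Collection<String> ipAddresses;

  public HostSets(Collection<String> ipAddresses) {
    // store an empty collection instead of null when nothing was supplied,
    // so callers never need a null check
    this.ipAddresses = ipAddresses == null
        ? Collections.<String>emptyList()
        : ipAddresses;
  }

  public boolean includes(String ip) {
    // an empty collection simply never matches
    return ipAddresses.contains(ip);
  }
}
{code}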

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch, 
> HADOOP-15828.3.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Use DelegationTokenIssuer to create KMS delegation tokens that can authenticate to all KMS instances

2018-10-15 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650427#comment-16650427
 ] 

Daryn Sharp commented on HADOOP-14445:
--

+1. Just take out the now unused import in TestEncryptionZones.  Surprised we 
both missed the doubled up class...

> Use DelegationTokenIssuer to create KMS delegation tokens that can 
> authenticate to all KMS instances
> 
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.15.patch, HADOOP-14445.16.patch, HADOOP-14445.17.patch, 
> HADOOP-14445.18.patch, HADOOP-14445.19.patch, HADOOP-14445.20.patch, 
> HADOOP-14445.addemdum.patch, HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.branch-3.0.001.patch, HADOOP-14445.compat.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650391#comment-16650391
 ] 

Vishwajeet Dusane commented on HADOOP-15851:


Thank you [~ste...@apache.org] - I ran the hadoop-azure test suite. All tests run 
fine except a couple of tests which fail with or without this patch.

 
{code:java}
[ERROR] Errors:
[ERROR] 
ITestAzureBlobFileSystemRandomRead.testRandomRead:112->verifyConsistentReads:580
 » IO
[ERROR] 
ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance:429->randomRead:498
 » IO
[INFO]
[ERROR] Tests run: 307, Failures: 0, Errors: 2, Skipped: 197

{code}
I am investigating the failure; based on the initial analysis, I will raise a 
separate JIRA.

The failure occurs both with and without this patch, so the patch is clean and 
has not introduced any issues.

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly emits logging messages like the one below
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may fiddle with existing scripts which parse logs with a 
> predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15828) Review of MachineList class

2018-10-15 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650331#comment-16650331
 ] 

BELUGA BEHR commented on HADOOP-15828:
--

[~elgoiri] Please review :)

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch, 
> HADOOP-15828.3.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) Allow CopyCommitter to skip concatenating source files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15850:

Component/s: tools/distcp

> Allow CopyCommitter to skip concatenating source files specified by 
> DistCpConstants.CONF_LABEL_LISTING_FILE_PATH
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> hbase against hadoop 3.1.1.
> hbase's MapReduceBackupCopyJob$BackupDistCp would create the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise, the following from toString() 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.
> There should be a way to tell DistCp to skip source-file concatenation.
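
For illustration only, a minimal sketch of one possible shape for the requested
switch. The configuration key name and the helper class are assumptions, not
part of any attached patch; the real mechanism would live inside CopyCommitter.
{code:java}
import org.apache.hadoop.conf.Configuration;

/** Hedged sketch: a job-level switch letting callers opt out of chunk concatenation. */
public class SkipConcatSketch {
  // Hypothetical key name.
  static final String SKIP_CONCAT_KEY = "distcp.copy.listing.skip.concat";

  static boolean shouldConcatFileChunks(Configuration conf) {
    // Default keeps today's behaviour; a caller such as the HBase backup job
    // could set the key to true when every listed file is independent.
    return !conf.getBoolean(SKIP_CONCAT_KEY, false);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.setBoolean(SKIP_CONCAT_KEY, true);
    System.out.println("concatenate chunks? " + shouldConcatFileChunks(conf)); // prints false
  }
}
{code}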



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) Allow CopyCommitter to skip concatenating source files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15850:

Affects Version/s: 3.1.1

> Allow CopyCommitter to skip concatenating source files specified by 
> DistCpConstants.CONF_LABEL_LISTING_FILE_PATH
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp would create a listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise, the following from toString() 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.
> There should be a way to tell DistCp to skip source-file concatenation.
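
As a hedged caller-side usage sketch of the same idea: the two DistCpConstants
labels below are the ones quoted in the description, while the skip-concatenation
key is the hypothetical switch sketched earlier in this digest; the listing path
and record count are placeholder values.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tools.DistCpConstants;

/** Hedged sketch of the caller side described above (e.g. the HBase backup job). */
public class BackupDistCpConfigSketch {
  public static void main(String[] args) {
    Configuration cfg = new Configuration(false);
    // The two labels below are quoted from the issue description.
    cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, "/tmp/backup-listing.seq");
    cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 2L);
    // Hypothetical switch proposed above: skip source-file concatenation.
    cfg.setBoolean("distcp.copy.listing.skip.concat", true);
    System.out.println(cfg.getBoolean("distcp.copy.listing.skip.concat", false));
  }
}
{code}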



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15824) RawLocalFileSystem initialize() raises Null Pointer Exception

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650156#comment-16650156
 ] 

Steve Loughran commented on HADOOP-15824:
-

bq. The working directory is defined by getInitialWorkingDirectory, which is 
qualifying the user.dir path against getWorkingDirectory(). Which obviously 
isn't set yet and is null...

Oh, that's funny.

Presumably sbt is doing something with test process setup that it shouldn't.
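
To make the ordering problem concrete, here is a minimal, self-contained sketch
using plain java.nio types. It is not the actual RawLocalFileSystem code; it
only illustrates an initializer that qualifies user.dir against a working
directory that has not been assigned yet.
{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

/** Hypothetical illustration of the initialization-order NPE described above. */
class WorkingDirDemo {
  private Path workingDir;                       // still unassigned when the constructor runs

  WorkingDirDemo() {
    this.workingDir = initialWorkingDirectory(); // the NPE is thrown from in here
  }

  Path currentWorkingDirectory() {
    return workingDir;                           // null at construction time
  }

  Path initialWorkingDirectory() {
    // Qualify user.dir against the "current" working directory, which is still null.
    return currentWorkingDirectory().resolve(Paths.get(System.getProperty("user.dir")));
  }

  public static void main(String[] args) {
    new WorkingDirDemo();                        // throws NullPointerException
  }
}
{code}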

> RawLocalFileSystem initialize() raises Null Pointer Exception
> -
>
> Key: HADOOP-15824
> URL: https://issues.apache.org/jira/browse/HADOOP-15824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.3, 2.8.4, 3.1.1
> Environment: Hadoop 2.8.4 + Spark & yarn client launch
>Reporter: Tank Sui
>Priority: Minor
>
> {code:java}
> [ERROR]09:33:13.143 [main] org.apache.spark.SparkContext - Error initializing 
> SparkContext.
> 10/6/2018 5:33:13 PM java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:351)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:649)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.(SparkContext.scala:500)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.(SparkContext.scala:126)
> 10/6/2018 5:33:13 PM  at services.SparkService.tryInit(SparkService.scala:49)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController.(DataController.scala:38)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController$$FastClassByGuice$$9ed55d7d.newInstance()
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:111)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> 

[jira] [Commented] (HADOOP-15841) ABFS: change createRemoteFileSystemDuringInitialization default to true

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650155#comment-16650155
 ] 

Steve Loughran commented on HADOOP-15841:
-

+0 on this; we don't do it for other stores.

If/when we add a proper object-store CLI entry point, this could be one of the 
operations to provide.

> ABFS: change createRemoteFileSystemDuringInitialization default to true
> ---
>
> Key: HADOOP-15841
> URL: https://issues.apache.org/jira/browse/HADOOP-15841
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> I haven't seen a way to create a working container (at least for the dfs 
> endpoint) except for setting 
> fs.azure.createRemoteFileSystemDuringInitialization=true. I personally don't 
> see that much of a downside to having it default to true, and it's a mild 
> inconvenience to remember to set it to true for some action to create a 
> container. I vaguely recall [~tmarquardt] considering changing this default 
> too.
> I propose we do it.
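
For reference, a hedged sketch of how a client might set the option named above
before opening the store. The option name is the one quoted in the description;
the container/account URI is a placeholder, and the existence check at the end
is only there to force initialization.
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hedged sketch: ask ABFS to create the container during initialization. */
public class AbfsCreateOnInitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Option name quoted from the issue description above.
    conf.setBoolean("fs.azure.createRemoteFileSystemDuringInitialization", true);
    // Placeholder URI; substitute a real container and account.
    FileSystem fs = FileSystem.get(
        URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
    System.out.println(fs.exists(new Path("/")));
  }
}
{code}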



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650124#comment-16650124
 ] 

Hadoop QA commented on HADOOP-14556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
20s{color} | {color:green} root generated 0 new + 1326 unchanged - 1 fixed = 
1326 total (was 1327) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 15 new + 112 unchanged 
- 6 fixed = 127 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 110 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
3s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
39s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | 

[jira] [Commented] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650104#comment-16650104
 ] 

Steve Loughran commented on HADOOP-15851:
-

Patch looks OK.

As per the usual object-store due-diligence process: which ABFS location have 
you run the full hadoop-azure ABFS integration test suite against?

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a 
> predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650013#comment-16650013
 ] 

Hadoop QA commented on HADOOP-15851:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15851 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943897/HADOOP-15851-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 27bc693d075d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5033deb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15365/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15365/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Disable wildfly logs to the console
> 

[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Patch Available  (was: Open)

Patch 012; checkstyle and weekly update patch

* add options to core-default.xml
* address javadoc issues from the previous patch

The main change is that the session token will lift and forward any existing 
session credentials its auth chain provides. The standard DT login chain is 
"simple" (full keys in config options) plus env vars, but if the env vars hold 
session credentials, or the chain is configured to use temporary credentials, 
then those credentials are marshalled into the DT *after a warning is logged*.

The warning & docs cover a limitation of forwarding: the token lifetime is now 
that of the existing credentials, which we don't know. But it does allow people 
who only have session credentials (e.g. issued via 2FA) to pass them on as DTs.

Role DTs don't support this: you can't call STS.assumeRole with session tokens.

TODO
* add a test for session credential forwarding
* salvage something from the MR test that uses a mock YARN client for job 
submission, so as to avoid the challenge of bringing up a secure mini YARN cluster
* only log the forwarding warning once

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
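
As a hedged usage sketch of the client-side call named in the first bullet of
the description above: FileSystem.getDelegationToken() is the real API, while
the bucket URI and the renewer name are placeholders, and printing the token
kind is an illustration rather than part of the patch.
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

/** Hedged sketch: requesting a delegation token from an S3A filesystem. */
public class S3ADelegationTokenSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    // Request a DT for a placeholder renewer; null means no token support is configured.
    Token<?> token = fs.getDelegationToken("yarn");
    if (token != null) {
      System.out.println("Issued token of kind " + token.getKind());
    }
  }
}
{code}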



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-15 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Attachment: HADOOP-14556-012.patch

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-15851:
---
Status: Patch Available  (was: Open)

[~ste...@apache.org] - Could you please take a look at this patch? The patch 
has been verified manually.

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a 
> predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-15851:
---
Attachment: HADOOP-15851-001.patch

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15851-001.patch
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a 
> predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649914#comment-16649914
 ] 

Vishwajeet Dusane edited comment on HADOOP-15851 at 10/15/18 8:57 AM:
--

Wildfly SSL is logging its [version information - source code 
link|https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196].
Permanently disabling this log message once OpenSSL has loaded successfully 
would be useful.


was (Author: vishwajeet.dusane):
Wildfly ssl and it is logging its [version information - source code 
link|[https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196].]
 Disabling these log message when successfully loaded openssl permanently would 
be useful.

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a 
> predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649914#comment-16649914
 ] 

Vishwajeet Dusane commented on HADOOP-15851:


Wildfly SSL is logging its [version information - source code 
link|https://github.com/wildfly/wildfly-openssl/blob/ace72ba07d0c746b6eb46635f4a8b122846c47c8/java/src/main/java/org/wildfly/openssl/SSL.java#L196].
Permanently disabling this log message once OpenSSL has loaded successfully 
would be useful.

> Disable wildfly logs to the console
> ---
>
> Key: HADOOP-15851
> URL: https://issues.apache.org/jira/browse/HADOOP-15851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.2.0
>
>
> On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
> {code:java}
> Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
> INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
> {code}
> These messages may interfere with existing scripts that parse logs with a 
> predefined schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15851) Disable wildfly logs to the console

2018-10-15 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HADOOP-15851:
--

 Summary: Disable wildfly logs to the console
 Key: HADOOP-15851
 URL: https://issues.apache.org/jira/browse/HADOOP-15851
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/azure
Reporter: Vishwajeet Dusane
Assignee: Vishwajeet Dusane
 Fix For: 3.2.0


On loading the OpenSSL library successfully, Wildfly logs messages like the one below:
{code:java}
Oct 15, 2018 6:47:24 AM org.wildfly.openssl.SSL init
INFO: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.0g 2 Nov 2017
{code}
These messages may interfere with existing scripts that parse logs with a 
predefined schema.
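
One hedged way to achieve this from client code, assuming the message is
emitted through java.util.logging under the package name shown in the log line
above; the logger name is inferred from that output, and the actual patch may
take a different approach (for example, adjusting the level inside the Azure
connector itself).
{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

/** Hedged sketch: raise the JUL level for the wildfly-openssl logger so the
 *  INFO version line quoted above is not written to the console. */
public class SilenceWildflySslSketch {
  // Keep a strong reference so the configured logger is not garbage-collected.
  private static final Logger WILDFLY_SSL_LOGGER = Logger.getLogger("org.wildfly.openssl");

  public static void main(String[] args) {
    WILDFLY_SSL_LOGGER.setLevel(Level.WARNING);
    // ... initialize the OpenSSL-backed SSL provider afterwards; the INFO line is suppressed.
  }
}
{code}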



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Use DelegationTokenIssuer to create KMS delegation tokens that can authenticate to all KMS instances

2018-10-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649830#comment-16649830
 ] 

Hadoop QA commented on HADOOP-14445:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 1 new + 391 unchanged 
- 2 fixed = 392 total (was 393) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
49s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}226m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943866/HADOOP-14445.addemdum.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8c2567fb2cf0 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |