[jira] [Commented] (HADOOP-15335) Support xxxxxxx:xxx/stacks print lock info and more useful attribute of thread info

2018-04-10 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433396#comment-16433396
 ] 

maobaolong commented on HADOOP-15335:
-

[~yiran] What a great improvement. This can help us find lock ownership 
information.

+1.

> Support xxx:xxx/stacks print lock info and more useful attribute of 
> thread info
> ---
>
> Key: HADOOP-15335
> URL: https://issues.apache.org/jira/browse/HADOOP-15335
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Priority: Major
> Attachments: HADOOP-15335.001.patch, HADOOP-15335.002.patch
>
>
> Print stack information and other info shown in the WebUI
> http://namenode:50070/stacks?contentionTracing=true
> {code:java}
> Thread 2 (Reference Handler):
>   State: WAITING
>   Blocked count: 8
>   Waited count: 5
>   Thread CpuTime: 591
>   Thread UserTime: 5754000
>   Thread allocatedBytes: 0
>   Waiting on java.lang.ref.Reference$Lock@4b3ed2f0
>   Blocked by -1
>   Stack:
> java.lang.Object.wait(Native Method)
> java.lang.Object.wait(Object.java:502)
> java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
> Thread 1 (main):
>   State: WAITING
>   Blocked count: 4
>   Waited count: 2
>   Thread CpuTime: 2563937000
>   Thread UserTime: 977000
>   Thread allocatedBytes: 229115520
>   Waiting on org.apache.hadoop.ipc.ProtobufRpcEngine$Server@442a2e48
>   Blocked by -1
>   Stack:
> java.lang.Object.wait(Native Method)
> java.lang.Object.wait(Object.java:502)
> org.apache.hadoop.ipc.Server.join(Server.java:2498)
> 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.join(NameNodeRpcServer.java:442)
> org.apache.hadoop.hdfs.server.namenode.NameNode.join(NameNode.java:865)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1573)
> -
> Locks info:
> -
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync@cd6c71a lockedBy 
> Thread 43 (IPC Server handler 7 on 8020) 
>   Waiting thread num: 6
>   Stack:
> 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3665)
> 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:868)
> 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:583)
> 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2076)
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2072)
> java.security.AccessController.doPrivileged(Native Method)
> javax.security.auth.Subject.doAs(Subject.java:422)
> 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1803)
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2072)
> java.lang.ref.ReferenceQueue$Lock@31bcf236 lockedBy UNKNOW
> java.lang.ref.Reference$Lock@4b3ed2f0 lockedBy UNKNOW
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server@442a2e48 lockedBy UNKNOW
> {code}
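For reference, here is a minimal sketch of how this kind of per-thread lock 
information can be pulled from the JVM via java.lang.management.ThreadMXBean; the 
class name and output format below are illustrative, not taken from the patch.

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class LockInfoSketch {
  public static void main(String[] args) {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    // Request locked monitors and ownable synchronizers so lock owners are reported.
    ThreadInfo[] infos = bean.dumpAllThreads(true, true);
    for (ThreadInfo info : infos) {
      System.out.printf("Thread %d (%s):%n  State: %s%n",
          info.getThreadId(), info.getThreadName(), info.getThreadState());
      if (info.getLockName() != null) {
        System.out.printf("  Waiting on %s%n", info.getLockName());
      }
      // getLockOwnerId() returns -1 when no other thread owns the contended lock.
      System.out.printf("  Blocked by %d%n", info.getLockOwnerId());
      if (bean.isThreadCpuTimeSupported()) {
        System.out.printf("  Thread CpuTime: %d%n",
            bean.getThreadCpuTime(info.getThreadId()));
      }
    }
  }
}
{code}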



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433363#comment-16433363
 ] 

genericqa commented on HADOOP-15377:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 3 unchanged - 6 fixed = 3 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918464/HADOOP-15377.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 33f6d2b75d27 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d919eb6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14481/testReport/ |
| Max. process+thread count | 1466 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14481/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Updated] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15377:
-
Status: Patch Available  (was: Open)

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch, HADOOP-15377.2.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context.  I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433228#comment-16433228
 ] 

genericqa commented on HADOOP-15362:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 10 new + 52 unchanged - 86 fixed = 62 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918475/HADOOP-15362.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 62470d6d7292 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e813975 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14480/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14480/testReport/ |
| Max. process+thread count | 1766 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433131#comment-16433131
 ] 

Hudson commented on HADOOP-14445:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13961 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13961/])
HADOOP-14445. Delegation tokens are not shared between KMS instances. (xiao: 
rev 583fa6ed48ad3df40bcaa9c591d5ccd07ce3ea81)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSTokenRenewer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/KMSUtilFaultInjector.java
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSLegacyTokenRenewer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticatedURL.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestKMSUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/KMSUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSDelegationToken.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/package-info.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/kms/TestKMSClientProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/kms/TestLoadBalancingKMSClientProvider.java


> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.

[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.[0-1], branch-2, branch-2.[8-9].

Thanks Rushabh for the initial work and consistent reviews, and all others for 
comments / thoughts!

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
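To make the key mismatch concrete, here is a minimal sketch (the host and ports 
are made up) showing that the token service key is derived from the address/port 
the client connects to, so two KMS endpoints produce two different lookup keys:

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class TokenServiceKeySketch {
  public static void main(String[] args) {
    // The credentials lookup key is built from the host/port the client uses.
    Text kms1 = SecurityUtil.buildTokenService(new InetSocketAddress("localhost", 9600));
    Text kms2 = SecurityUtil.buildTokenService(new InetSocketAddress("localhost", 9601));
    // Different endpoints yield different service texts, so a delegation token
    // stored under one KMS instance's service is not found for the other.
    System.out.println(kms1 + " vs " + kms2 + " -> equal? " + kms1.equals(kms2));
  }
}
{code}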



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.8.4
   2.10.0

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15313) TestKMS should close providers

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15313:
---
Fix Version/s: 3.0.3
   3.1.1

> TestKMS should close providers
> --
>
> Key: HADOOP-15313
> URL: https://issues.apache.org/jira/browse/HADOOP-15313
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-15313.01.patch, HADOOP-15313.02.patch, 
> HADOOP-15313.03.patch
>
>
> During the review of HADOOP-14445, [~jojochuang] found that the key providers 
> are not closed in tests. Details are in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16397824=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16397824].
> We should investigate and handle that in all related tests.
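A minimal sketch of the close pattern being asked for in the tests (the provider 
URI here is only an example): acquire the provider, use it, and release it in a 
finally block.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class CloseProviderSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // A local JavaKeyStore-backed provider; the URI is illustrative.
    KeyProvider provider =
        KeyProviderFactory.get(new URI("jceks://file/tmp/test.jceks"), conf);
    try {
      System.out.println(provider.getKeys());
    } finally {
      // The point of this issue: always release the provider when the test is done.
      provider.close();
    }
  }
}
{code}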



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15313) TestKMS should close providers

2018-04-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433102#comment-16433102
 ] 

Xiao Chen commented on HADOOP-15313:


Cherry-picked to branch-3.1/branch-3.0

> TestKMS should close providers
> --
>
> Key: HADOOP-15313
> URL: https://issues.apache.org/jira/browse/HADOOP-15313
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-15313.01.patch, HADOOP-15313.02.patch, 
> HADOOP-15313.03.patch
>
>
> During the review of HADOOP-14445, [~jojochuang] found that the key providers 
> are not closed in tests. Details are in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16397824=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16397824].
> We should investigate and handle that in all related tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Patch Available  (was: Open)

# Merged with recent changes to trunk
 # Fixed unit test breakages
 # Removed all superfluous end-of-line whitespace

 

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch, HADOOP-15362.5.patch
>
>
> * Various improvements
>  * Fix a lot of checkstyle errors
> When I recently ran an MR job with debug logging enabled, I was spammed with 
> the following messages.  I ask that we move them to 'trace' as there is already 
> debug-level logging preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> foreach item {
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);
> }{code}
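As a minimal sketch of the requested change (the surrounding helper is 
illustrative, not the actual Configuration code): the per-property lines move to 
TRACE and use SLF4J's parameterized form, so enabling DEBUG alone no longer emits 
one line per property and the message is only formatted when TRACE is active.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeprecationLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeprecationLoggingSketch.class);

  // Illustrative helper: one DEBUG line per config, one TRACE line per property.
  static void handleDeprecation(Iterable<String> propertyNames) {
    LOG.debug("Handling deprecation for all properties in config");
    for (String name : propertyNames) {
      // The {} placeholder is only rendered when TRACE is actually enabled.
      LOG.trace("Handling deprecation for {}", name);
    }
  }
}
{code}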



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Attachment: HADOOP-15362.5.patch

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch, HADOOP-15362.5.patch
>
>
> * Various improvements
>  * Fix a lot of checkstyle errors
> When I recently ran an MR job with debug logging enabled, I was spammed with 
> the following messages.  I ask that we move them to 'trace' as there is already 
> debug-level logging preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> foreach item {
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Open  (was: Patch Available)

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch, HADOOP-15362.5.patch
>
>
> * Various improvements
>  * Fix a lot of checkstyle errors
> When I recently ran an MR job with debug logging enabled, I was spammed with 
> the following messages.  I ask that we move them to 'trace' as there is already 
> debug-level logging preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> foreach item {
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433084#comment-16433084
 ] 

Hudson commented on HADOOP-15357:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13960 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13960/])
HADOOP-15357. Configuration.getPropsWithPrefix no longer does variable (jlowe: 
rev e81397545a273cf9a090010eb644b836e0ef8c7b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch, 
> HADOOP-15357.003.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the property values.  After 
> [HADOOP-13556], it uses props.getProperty().
> The difference is that Configuration.get() does deprecation handling and, more 
> importantly, variable substitution on the value.  So if a property has a 
> variable specified with ${variable_name}, it will no longer be expanded when 
> retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.
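A minimal sketch of the substitution difference described above (the property 
names are made up; getRaw() stands in for the plain props.getProperty() lookup):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PrefixSubstitutionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("basedir", "/data");
    conf.set("myapp.output", "${basedir}/out");

    // get() performs variable substitution: prints /data/out
    System.out.println(conf.get("myapp.output"));
    // getRaw(), like a plain Properties lookup, does not: prints ${basedir}/out
    System.out.println(conf.getRaw("myapp.output"));
  }
}
{code}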



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433073#comment-16433073
 ] 

Ajay Kumar commented on HADOOP-15377:
-

+1

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch, HADOOP-15377.2.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context.  I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-10 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15357:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks to [~Jim_Brennan] for the contribution and to [~lmccay] for additional 
review!  I committed this to trunk, branch-3.1, branch-3.0, branch-2, and 
branch-2.9.

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch, 
> HADOOP-15357.003.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the property values.  After 
> [HADOOP-13556], it uses props.getProperty().
> The difference is that Configuration.get() does deprecation handling and, more 
> importantly, variable substitution on the value.  So if a property has a 
> variable specified with ${variable_name}, it will no longer be expanded when 
> retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14940) Set default RPC timeout to 5 minutes

2018-04-10 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned HADOOP-14940:
-

Assignee: (was: Yufei Gu)

> Set default RPC timeout to 5 minutes
> 
>
> Key: HADOOP-14940
> URL: https://issues.apache.org/jira/browse/HADOOP-14940
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Priority: Major
>
> The motivation is well described in HADOOP-11252. 
> {quote}
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not timeout when used to 
> write data. The issue has shown in YARN-2578 and HDFS-4858. Timeouts for 
> writes then fall back to the tcp level retry (configured via tcp_retries2) 
> and timeouts between the 15-30 minutes. Which is too long for a default 
> behaviour.
> {quote}
> However, HADOOP-11252 didn't set the default value to a meaningful timeout (it 
> is zero, which means infinity). Users will still hit this issue by default. 
> Maybe we should set the default value to a meaningful one.
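A minimal sketch of what the proposal amounts to, assuming the 
ipc.client.rpc-timeout.ms key introduced by HADOOP-11252 and the 5-minute value 
suggested in the title:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class RpcTimeoutSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Proposed default: 5 minutes instead of 0 (no timeout), so a hung connection
    // fails within minutes rather than falling back to TCP-level retries (15-30 min).
    conf.setInt("ipc.client.rpc-timeout.ms", 5 * 60 * 1000);
    System.out.println(conf.getInt("ipc.client.rpc-timeout.ms", 0));
  }
}
{code}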



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432986#comment-16432986
 ] 

BELUGA BEHR commented on HADOOP-15377:
--

Added a 'debug' statement at L131 to provide additional context.

Reverted "toLowerCase" change

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch, HADOOP-15377.2.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context.  I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15377:
-
Attachment: HADOOP-15377.2.patch

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch, HADOOP-15377.2.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context.  I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15377:
-
Status: Open  (was: Patch Available)

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context.  I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432908#comment-16432908
 ] 

Wangda Tan commented on HADOOP-15205:
-

Thanks [~shv], 

I quickly checked; none of the releases since 2.8.3 have source jars 
(including the released 3.0.1 version).

https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-nfs/

You can change the version to see per-release artifacts.

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like that artifact has 
> not been present at Maven Central since 2.7.5; the last release that had source 
> attachments / javadocs was 2.7.4
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> this does not seem to be limited to mapreduce, as the same change is present 
> for yarn-common as well
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432894#comment-16432894
 ] 

Rushabh S Shah commented on HADOOP-14445:
-

{quote}The branch-2 and branch-2.8 conflicts are minor. Rushabh S Shah do you 
want to give a final pass on them?
{quote}
+1 binding on branch-2 and branch-2.8 patches.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432876#comment-16432876
 ] 

Ajay Kumar commented on HADOOP-15377:
-

LGTM. Shall we add an info message at L131 to reflect which file was not located?

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context.  I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432855#comment-16432855
 ] 

genericqa commented on HADOOP-15361:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 68 unchanged - 2 fixed = 68 total (was 70) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFsTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918417/HADOOP-15361.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4c7b42430f82 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cef8eb7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14479/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14479/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432840#comment-16432840
 ] 

genericqa commented on HADOOP-15362:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 61 unchanged - 77 fixed = 64 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestSSLFactory |
|   | hadoop.security.alias.TestCredentialProviderFactory |
|   | hadoop.conf.TestConfiguration |
|   | hadoop.security.TestSecurityUtil |
|   | hadoop.security.TestLdapGroupsMapping |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918419/HADOOP-15362.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b7cc1d2e98b5 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cef8eb7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HADOOP-15340) Provide meaningful RPC server name for RpcMetrics

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432785#comment-16432785
 ] 

Hudson commented on HADOOP-15340:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13958 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13958/])
HADOOP-15340. Provide meaningful RPC server name for RpcMetrics. (xyao: rev 
8ab776d61e569c12ec62024415ff68e5d3b10141)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java


> Provide meaningful RPC server name for RpcMetrics
> -
>
> Key: HADOOP-15340
> URL: https://issues.apache.org/jira/browse/HADOOP-15340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15340.001.patch, HADOOP-15340.002.patch, 
> HADOOP-15340.003.patch
>
>
> In case of multiple RPC servers in the same JVM it's hard to identify the 
> metric data. The only available information as of now is the port number.
> Server name is also added in the constructor of Server.java but it's not used 
> at all.
> This patch fixes this behaviour:
>  1. The server name is saved to a field in Server.java (the constructor 
> signature is not changed)
>  2. ServerName is added as a tag to the metrics in RpcMetrics
>  3. The naming convention for the servers is fixed.
> About 3: if the server name is not defined, the current code tries to derive 
> the name from the class name, which is not always easy, as in some cases the 
> server has a protobuf-generated name that may also be an inner class.
> The patch also improves the detection of the name (if it's not defined). It's 
> a compatible change as the current name is not used at all.
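
A minimal sketch of what such class-name-based detection could look like, 
written as a standalone Java helper (the class, method, and rules here are 
illustrative assumptions, not the code in the attached patches):

{code:java}
/** Illustrative only: derive a readable server name from a protocol class. */
public final class ServerNameSketch {

  private ServerNameSketch() {
  }

  static String serverNameFromClass(Class<?> clazz) {
    // Strip the package prefix, e.g. org.apache.hadoop.ipc.Foo -> Foo.
    String name = clazz.getName();
    int lastDot = name.lastIndexOf('.');
    if (lastDot >= 0) {
      name = name.substring(lastDot + 1);
    }
    // Protobuf-generated protocols are often nested classes such as
    // ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2; keep the
    // innermost named segment and drop purely numeric (anonymous) suffixes.
    String[] parts = name.split("\\$");
    for (int i = parts.length - 1; i >= 0; i--) {
      if (!parts[i].matches("\\d+")) {
        return parts[i];
      }
    }
    return name;
  }
}
{code}

With that sketch, a generated class like 
ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2 would yield the tag 
value ClientNamenodeProtocol, which is far easier to read in metrics than a 
port number.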



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15340) Provide meaningful RPC server name for RpcMetrics

2018-04-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-15340:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~elek] for the contribution and all for the reviews. I've committed the 
patch to the trunk. 

> Provide meaningful RPC server name for RpcMetrics
> -
>
> Key: HADOOP-15340
> URL: https://issues.apache.org/jira/browse/HADOOP-15340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15340.001.patch, HADOOP-15340.002.patch, 
> HADOOP-15340.003.patch
>
>
> In case of multiple RPC servers in the same JVM it's hard to identify the 
> metric data. The only available information as of now is the port number.
> Server name is also added in the constructor of Server.java but it's not used 
> at all.
> This patch fixes this behaviour:
>  1. The server name is saved to a field in Server.java (the constructor 
> signature is not changed)
>  2. ServerName is added as a tag to the metrics in RpcMetrics
>  3. The naming convention for the servers is fixed.
> About 3: if the server name is not defined, the current code tries to derive 
> the name from the class name, which is not always easy, as in some cases the 
> server has a protobuf-generated name that may also be an inner class.
> The patch also improves the detection of the name (if it's not defined). It's 
> a compatible change as the current name is not used at all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432753#comment-16432753
 ] 

Xiao Chen commented on HADOOP-14445:


The branch-2 and branch-2.8 conflicts are minor. [~shahrs87] do you want to 
give a final pass on them?

I plan to commit this at 4 PM PST today. Thanks!

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
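
For context, a rough sketch of why per-instance keys prevent sharing: the 
client derives the token lookup key from each KMS URL's host and port, so a 
token obtained from one instance is simply not found when the provider fails 
over to another (standalone illustration with hypothetical host names, not the 
fix itself):

{code:java}
import java.net.InetSocketAddress;
import java.net.URI;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.token.Token;

public class KmsTokenLookupSketch {
  /** Token lookup the way the client does it: keyed by the KMS host:port. */
  static Token<?> lookup(Credentials creds, URI kmsUri) {
    InetSocketAddress addr =
        new InetSocketAddress(kmsUri.getHost(), kmsUri.getPort());
    Text service = SecurityUtil.buildTokenService(addr);
    // kms1.example.com:9600 and kms2.example.com:9600 produce different
    // service keys, so a token fetched from kms1 is never returned for kms2.
    return creds.getToken(service);
  }
}
{code}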



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15340) Provide meaningful RPC server name for RpcMetrics

2018-04-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-15340:

Summary: Provide meaningful RPC server name for RpcMetrics  (was: Fix the 
RPC server name usage to provide information about the metrics)

> Provide meaningful RPC server name for RpcMetrics
> -
>
> Key: HADOOP-15340
> URL: https://issues.apache.org/jira/browse/HADOOP-15340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15340.001.patch, HADOOP-15340.002.patch, 
> HADOOP-15340.003.patch
>
>
> In case of multiple RPC servers in the same JVM it's hard to identify the 
> metric data. The only available information as of now is the port number.
> Server name is also added in the constructor of Server.java but it's not used 
> at all.
> This patch fixes this behaviour:
>  1. The server name is saved to a field in Server.java (the constructor 
> signature is not changed)
>  2. ServerName is added as a tag to the metrics in RpcMetrics
>  3. The naming convention for the servers is fixed.
> About 3: if the server name is not defined, the current code tries to derive 
> the name from the class name, which is not always easy, as in some cases the 
> server has a protobuf-generated name that may also be an inner class.
> The patch also improves the detection of the name (if it's not defined). It's 
> a compatible change as the current name is not used at all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432747#comment-16432747
 ] 

genericqa commented on HADOOP-14445:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 26s{color} 
| {color:red} root generated 1 new + 1439 unchanged - 0 fixed = 1440 total (was 
1439) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
313 unchanged - 7 fixed = 313 total (was 320) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
39s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918409/HADOOP-14445.branch-2.06.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 404f0ab5a32e 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / ea51ef4 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_171 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14477/artifact/out/diff-compile-javac-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14477/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Comment Edited] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2018-04-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432734#comment-16432734
 ] 

Arpit Agarwal edited comment on HADOOP-12953 at 4/10/18 6:30 PM:
-

Thanks for taking up this change [~bharatviswa].

We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, 
libhdfs_wrapper_defines.h, etc. 

Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar 
to hdfsConnectAsUser.

Nitpick: single statement if/else blocks should still have curly braces. e.g. 
here:
{code}
if (bld->createProxyUser)
methodToCall = "newInstanceAsProxyUser";
else
methodToCall = "newInstance";
{code}


was (Author: arpitagarwal):
Thanks for taking up this change [~bharatviswa].

We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, 
libhdfs_wapper_defines.h etc. 

Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar 
to hdfsConnectAsUser.

Nitpick: single statement if/else blocks should still have curly braces. e.e. 
here:
{code}
if (bld->createProxyUser)
methodToCall = "newInstanceAsProxyUser";
else
methodToCall = "newInstance";
{code}

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Uday Kale
>Assignee: Uday Kale
>Priority: Major
> Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, 
> HADOOP-12953.003.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSytem.get() or FileSystem.newInstance() with the user name to connect as. 
> But, both these interfaces use getBestUGI() to get the UGI for the given 
> user. This is not necessarily true for all services whose end-users would not 
> access HDFS directly, but go via the service to first get authenticated with 
> LDAP, then the service owner can impersonate the end-user to eventually 
> provide the underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details wont be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either. 
> Hence the need for the new API for libhdfs to get the FileSystem object as a 
> proxy user using the 'secure impersonation' recommendations. This approach is 
>  secure since HDFS authenticates the service owner and then validates the 
> right for the service owner to impersonate the given user as allowed by 
> hadoop.proxyusers.* parameters of HDFS config.
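
For reference, the Java-side "secure impersonation" pattern that a libhdfs 
proxy-user API would map onto is roughly the following (a sketch under the 
stated assumptions; the class and method names here are illustrative):

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserFsSketch {
  static FileSystem newFsAsProxyUser(String endUser, Configuration conf)
      throws Exception {
    // The service owner is the Kerberos-authenticated login user.
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    // Impersonate the LDAP-authenticated end user; HDFS checks the
    // hadoop.proxyuser.* settings to decide whether this is allowed.
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser(endUser, realUser);
    return proxyUgi.doAs(
        (PrivilegedExceptionAction<FileSystem>) () ->
            FileSystem.newInstance(conf));
  }
}
{code}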



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2018-04-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432734#comment-16432734
 ] 

Arpit Agarwal commented on HADOOP-12953:


Thanks for taking up this change [~bharatviswa].

We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, 
libhdfs_wrapper_defines.h, etc. 

Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar 
to hdfsConnectAsUser.

Nitpick: single statement if/else blocks should still have curly braces, e.g. 
here:
{code}
if (bld->createProxyUser)
methodToCall = "newInstanceAsProxyUser";
else
methodToCall = "newInstance";
{code}
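
For illustration only, the braced form the nitpick asks for would look like 
the sketch below; it is written as self-contained Java to show the style, 
while the actual snippet lives in the libhdfs C client:

{code:java}
public class BraceStyleSketch {
  static String pickFactoryMethod(boolean createProxyUser) {
    // Even single-statement branches get braces, per the review comment.
    if (createProxyUser) {
      return "newInstanceAsProxyUser";
    } else {
      return "newInstance";
    }
  }
}
{code}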

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Uday Kale
>Assignee: Uday Kale
>Priority: Major
> Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, 
> HADOOP-12953.003.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSytem.get() or FileSystem.newInstance() with the user name to connect as. 
> But, both these interfaces use getBestUGI() to get the UGI for the given 
> user. This is not necessarily true for all services whose end-users would not 
> access HDFS directly, but go via the service to first get authenticated with 
> LDAP, then the service owner can impersonate the end-user to eventually 
> provide the underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details wont be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either. 
> Hence the need for the new API for libhdfs to get the FileSystem object as a 
> proxy user using the 'secure impersonation' recommendations. This approach is 
>  secure since HDFS authenticates the service owner and then validates the 
> right for the service owner to impersonate the given user as allowed by 
> hadoop.proxyusers.* parameters of HDFS config.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Description: 
* Various improvements
 * Fix a lot of checkstyle errors

When I ran a recent MR job with debug logging enabled, I was spammed with the 
following messages. I ask that we move them to 'trace', as there is already a 
debug-level log line preceding them.
{code:java}
LOG.debug("Handling deprecation for all properties in config");
foreach item {
-  LOG.debug("Handling deprecation for " + (String)item);
+  LOG.trace("Handling deprecation for {}", item);
}{code}

  was:
* Various improvements
 * Fix a lot of checks style errors

When I ran a recent debug log against a MR job, I was spammed from the 
following messages.  I ask that we move them to 'trace' as there is already a 
debug level logging preceding them.
{code:java}
LOG.debug("Handling deprecation for all properties in config");
-  LOG.debug("Handling deprecation for " + (String)item);
+  LOG.trace("Handling deprecation for {}", item);{code}


> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch
>
>
> * Various improvements
>  * Fix a lot of checkstyle errors
> When I ran a recent MR job with debug logging enabled, I was spammed with the 
> following messages. I ask that we move them to 'trace', as there is already a 
> debug-level log line preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> foreach item {
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);
> }{code}
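
As an illustration of the suggested change, a parameterized SLF4J trace call 
only formats the message when trace logging is actually enabled, so the 
per-property lines drop out of normal debug runs (a sketch with assumed names, 
not the patch itself):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeprecationLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeprecationLoggingSketch.class);

  static void handleDeprecation(Iterable<String> properties) {
    LOG.debug("Handling deprecation for all properties in config");
    for (String item : properties) {
      // Parameterized trace logging: the string is only built if trace
      // level is enabled, so the per-property spam leaves the debug level.
      LOG.trace("Handling deprecation for {}", item);
    }
  }
}
{code}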



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Description: 
* Various improvements
 * Fix a lot of checkstyle errors

When I ran a recent MR job with debug logging enabled, I was spammed with the 
following messages. I ask that we move them to 'trace', as there is already a 
debug-level log line preceding them.
{code:java}
LOG.debug("Handling deprecation for all properties in config");
-  LOG.debug("Handling deprecation for " + (String)item);
+  LOG.trace("Handling deprecation for {}", item);{code}

  was:
* Various improvements
* Fix a lot of checks style errors


> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch
>
>
> * Various improvements
>  * Fix a lot of checkstyle errors
> When I ran a recent MR job with debug logging enabled, I was spammed with the 
> following messages. I ask that we move them to 'trace', as there is already a 
> debug-level log line preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Patch Available  (was: Open)

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Attachment: HADOOP-15362.4.patch

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Open  (was: Patch Available)

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432622#comment-16432622
 ] 

Andras Bokor commented on HADOOP-15361:
---

Patch 02 keeps the behavior of the old logic where necessary.

Let's see what Hadoop QA thinks.

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch
>
>
> Currently RawLocalFileSystem uses fallback logic for cross-volume renames. 
> The fallback is copy-on-fail: when the rename fails, it copies the source and 
> then deletes it.
>  An additional fallback was needed on Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic, RawLocalFileSystem does not pass the contract 
> tests (HADOOP-13082).
> By using the Java NIO framework, both fallbacks could be eliminated, since it 
> is not platform dependent and supports cross-volume renames.
> In addition, the fallback logic for Windows is not correct: Java IO overwrites 
> the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overwrites the destination but Files#move does not. We have to 
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the 
> destination even if the source or the destination is a file. So to make them 
> compatible we have to check that either the source or the destination is a 
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract tests passed so far, they should pass after this.
>  * Where the contract tests failed because of something Java-specific and not 
> because of the fallback logic, we should keep the original behavior.
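
A minimal sketch, assuming REPLACE_EXISTING should only be passed when a 
directory is involved, of what an NIO-based rename could look like (the class 
and the exact policy here are illustrative, not the attached patch):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NioRenameSketch {
  static boolean rename(Path src, Path dst) {
    try {
      if (Files.isDirectory(src) || Files.isDirectory(dst)) {
        // Mimic File#renameTo, which can replace an existing empty
        // directory destination; without REPLACE_EXISTING, Files#move
        // would throw FileAlreadyExistsException instead.
        Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
      } else {
        Files.move(src, dst);
      }
      return true;
    } catch (IOException e) {
      // Preserve the boolean contract of FileSystem#rename.
      return false;
    }
  }
}
{code}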



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Attachment: HADOOP-15361.02.patch

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch
>
>
> Currently RawLocalFileSystem uses fallback logic for cross-volume renames. 
> The fallback is copy-on-fail: when the rename fails, it copies the source and 
> then deletes it.
>  An additional fallback was needed on Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic, RawLocalFileSystem does not pass the contract 
> tests (HADOOP-13082).
> By using the Java NIO framework, both fallbacks could be eliminated, since it 
> is not platform dependent and supports cross-volume renames.
> In addition, the fallback logic for Windows is not correct: Java IO overwrites 
> the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overwrites the destination but Files#move does not. We have to 
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overwrites the 
> destination even if the source or the destination is a file. So to make them 
> compatible we have to check that either the source or the destination is a 
> directory before we add the copy option.
> I think the correct strategy is:
>  * Where the contract tests passed so far, they should pass after this.
>  * Where the contract tests failed because of something Java-specific and not 
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432596#comment-16432596
 ] 

Steve Loughran commented on HADOOP-15377:
-

LGTM, but do (always) make sure that toLowerCase()/toUpperCase() is given an 
explicit locale argument (e.g. Locale.ENGLISH), as not all regions do case 
conversion consistently with everyone's expectations.
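
For example, pinning the locale avoids surprises such as the Turkish dotless i 
(a small illustrative snippet, not taken from the patch):

{code:java}
import java.util.Locale;

public class CaseConversionSketch {
  public static void main(String[] args) {
    String key = "TITLE";
    // Under a Turkish default locale, "TITLE".toLowerCase() lowercases the
    // 'I' to a dotless \u0131, which silently breaks key matching.
    // An explicit locale keeps the conversion predictable everywhere.
    System.out.println(key.toLowerCase(Locale.ENGLISH));
  }
}
{code}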

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context. I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15378) Hadoop client unable to relogin because a remote DataNode has an incorrect krb5.conf

2018-04-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432583#comment-16432583
 ] 

Steve Loughran commented on HADOOP-15378:
-

This is bizarre even in the category of bizarre Kerberos errors. Really great 
to have you share this. Happy to have a section on it in 
https://github.com/steveloughran/kerberos_and_hadoop ; maybe just a link to 
this in the tales of "weird things", with "The ticket isn't for us" getting a 
callout in error messages.

Have you thought of running KDiag on the system to see what it showed up, and 
whether it could be improved? Maybe something to check the auth status of IPC 
endpoints: give it a list of endpoints and it will try to handshake with all of 
them, without bothering to actually say anything afterwards. Could be 
parallelisable.

> Hadoop client unable to relogin because a remote DataNode has an incorrect 
> krb5.conf
> 
>
> Key: HADOOP-15378
> URL: https://issues.apache.org/jira/browse/HADOOP-15378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
> Environment: CDH5.8.3, Kerberized, Impala
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> This is a very weird bug.
> We received a report where a Hadoop client (Impala Catalog server) failed to 
> relogin and crashed every several hours. Initial indication suggested the 
> symptom matched HADOOP-13433.
> But after we patched HADOOP-13433 (as well as HADOOP-15143), Impala Catalog 
> server still kept crashing.
>  
> {noformat}
> W0114 05:49:24.676743 41444 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException):
>  Failure to initialize security context
> W0114 05:49:24.680363 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host2.example@example.com), remove and destroy it.
> W0114 05:49:24.680501 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host3.example@example.com), remove and destroy it.
> W0114 05:49:24.680593 41444 UserGroupInformation.java:1153] Warning, no 
> kerberos ticket found while attempting to renew ticket{noformat}
> The error “Failure to initialize security context” is suspicious here. 
> Catalogd was unable to log in because of a Kerberos issue. The JDK expects 
> the first kerberos ticket of a principal to be a TGT, however it seems that 
> after this error, because it was unable to login successfully, the first 
> ticket was no longer a TGT. The patch HADOOP-13433 removed other tickets of 
> the principal, because it expects the TGT to be in the principal’s ticket, 
> which is untrue in this case. So finally, it removed all tickets.
> And then
> {noformat}
> W0114 05:49:24.681946 41443 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]
> {noformat}
> The error “Failed to find any Kerberos tgt” is typically an indication that 
> the user’s Kerberos ticket has expired. However, that’s definitely not the 
> case here, since it was just a little over 8 hours.
> After we patched HADOOP-13433, the error handling code exhibited NPE, as 
> reported in HADOOP-15143.
>  
> {code:java}
> I0114 05:50:26.758565 6384 RetryInvocationHandler.java:148] Exception while 
> invoking listCachePools of class ClientNamenodeProtocolTranslatorPB over 
> host4.example.com/10.0.121.66:8020 after 2 fail over attempts. Trying to fail 
> over immediately. Java exception follows: java.io.IOException: Failed on 
> local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "host1.example.com/10.0.121.45"; destination host 
> is: "host4.example.com":8020; at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1506) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1439) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  at com.sun.proxy.$Proxy9.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listCachePools(ClientNamenodeProtocolTranslatorPB.java:1261)
>  at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> 

[jira] [Commented] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432581#comment-16432581
 ] 

genericqa commented on HADOOP-15377:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 40m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 3 unchanged - 6 fixed = 3 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918390/HADOOP-15377.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d0a2f48b1efe 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6729047 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14476/testReport/ |
| Max. process+thread count | 1766 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14476/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Attachment: HADOOP-14445.branch-2.06.patch

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Attachment: (was: HADOOP-14445.branch-2.06.patch)

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15378) Hadoop client unable to relogin because a remote DataNode has an incorrect krb5.conf

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15378:
-
Affects Version/s: 2.6.0

> Hadoop client unable to relogin because a remote DataNode has an incorrect 
> krb5.conf
> 
>
> Key: HADOOP-15378
> URL: https://issues.apache.org/jira/browse/HADOOP-15378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
> Environment: CDH5.8.3, Kerberized, Impala
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> This is a very weird bug.
> We received a report where a Hadoop client (Impala Catalog server) failed to 
> relogin and crashed every several hours. Initial indication suggested the 
> symptom matched HADOOP-13433.
> But after we patched HADOOP-13433 (as well as HADOOP-15143), Impala Catalog 
> server still kept crashing.
>  
> {noformat}
> W0114 05:49:24.676743 41444 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException):
>  Failure to initialize security context
> W0114 05:49:24.680363 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host2.example@example.com), remove and destroy it.
> W0114 05:49:24.680501 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host3.example@example.com), remove and destroy it.
> W0114 05:49:24.680593 41444 UserGroupInformation.java:1153] Warning, no 
> kerberos ticket found while attempting to renew ticket{noformat}
> The error “Failure to initialize security context” is suspicious here. 
> Catalogd was unable to log in because of a Kerberos issue. The JDK expects 
> the first kerberos ticket of a principal to be a TGT, however it seems that 
> after this error, because it was unable to login successfully, the first 
> ticket was no longer a TGT. The patch HADOOP-13433 removed other tickets of 
> the principal, because it expects the TGT to be in the principal’s ticket, 
> which is untrue in this case. So finally, it removed all tickets.
> And then
> {noformat}
> W0114 05:49:24.681946 41443 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]
> {noformat}
> The error “Failed to find any Kerberos tgt” is typically an indication that 
> the user’s Kerberos ticket has expired. However, that’s definitely not the 
> case here, since it was just a little over 8 hours.
> After we patched HADOOP-13433, the error handling code exhibited NPE, as 
> reported in HADOOP-15143.
>  
> {code:java}
> I0114 05:50:26.758565 6384 RetryInvocationHandler.java:148] Exception while 
> invoking listCachePools of class ClientNamenodeProtocolTranslatorPB over 
> host4.example.com/10.0.121.66:8020 after 2 fail over attempts. Trying to fail 
> over immediately. Java exception follows: java.io.IOException: Failed on 
> local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "host1.example.com/10.0.121.45"; destination host 
> is: "host4.example.com":8020; at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1506) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1439) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  at com.sun.proxy.$Proxy9.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listCachePools(ClientNamenodeProtocolTranslatorPB.java:1261)
>  at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  at com.sun.proxy.$Proxy10.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:55)
>  at 
> org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:33)
>  at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>  at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>  at 
> 

[jira] [Commented] (HADOOP-15378) Hadoop client unable to relogin because a remote DataNode has an incorrect krb5.conf

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432530#comment-16432530
 ] 

Wei-Chiu Chuang commented on HADOOP-15378:
--

[~Apache9], I'd appreciate it if you could also take a look at this one, since 
you were the author of HADOOP-13433.

> Hadoop client unable to relogin because a remote DataNode has an incorrect 
> krb5.conf
> 
>
> Key: HADOOP-15378
> URL: https://issues.apache.org/jira/browse/HADOOP-15378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
> Environment: CDH5.8.3, Kerberized, Impala
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> This is a very weird bug.
> We received a report where a Hadoop client (Impala Catalog server) failed to 
> relogin and crashed every several hours. Initial indication suggested the 
> symptom matched HADOOP-13433.
> But after we patched HADOOP-13433 (as well as HADOOP-15143), Impala Catalog 
> server still kept crashing.
>  
> {noformat}
> W0114 05:49:24.676743 41444 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException):
>  Failure to initialize security context
> W0114 05:49:24.680363 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host2.example@example.com), remove and destroy it.
> W0114 05:49:24.680501 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host3.example@example.com), remove and destroy it.
> W0114 05:49:24.680593 41444 UserGroupInformation.java:1153] Warning, no 
> kerberos ticket found while attempting to renew ticket{noformat}
> The error “Failure to initialize security context” is suspicious here. 
> Catalogd was unable to log in because of a Kerberos issue. The JDK expects 
> the first Kerberos ticket of a principal to be a TGT; however, after this 
> error, because the login did not succeed, the first ticket was no longer a 
> TGT. The HADOOP-13433 patch removes the principal’s other tickets because it 
> assumes the TGT is present in the principal’s ticket cache, which is untrue 
> in this case. So in the end it removed all tickets.
> And then
> {noformat}
> W0114 05:49:24.681946 41443 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]
> {noformat}
> The error “Failed to find any Kerberos tgt” is typically an indication that 
> the user’s Kerberos ticket has expired. However, that is definitely not the 
> case here, since the ticket was only a little over 8 hours old.
> After we patched HADOOP-13433, the error handling code exhibited NPE, as 
> reported in HADOOP-15143.
>  
> {code:java}
> I0114 05:50:26.758565 6384 RetryInvocationHandler.java:148] Exception while 
> invoking listCachePools of class ClientNamenodeProtocolTranslatorPB over 
> host4.example.com/10.0.121.66:8020 after 2 fail over attempts. Trying to fail 
> over immediately. Java exception follows: java.io.IOException: Failed on 
> local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "host1.example.com/10.0.121.45"; destination host 
> is: "host4.example.com":8020; at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1506) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1439) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  at com.sun.proxy.$Proxy9.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listCachePools(ClientNamenodeProtocolTranslatorPB.java:1261)
>  at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  at com.sun.proxy.$Proxy10.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:55)
>  at 
> org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:33)
>  at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>  at 
> 

[jira] [Created] (HADOOP-15378) Hadoop client unable to relogin because a remote DataNode has an incorrect krb5.conf

2018-04-10 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15378:


 Summary: Hadoop client unable to relogin because a remote DataNode 
has an incorrect krb5.conf
 Key: HADOOP-15378
 URL: https://issues.apache.org/jira/browse/HADOOP-15378
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
 Environment: CDH5.8.3, Kerberized, Impala
Reporter: Wei-Chiu Chuang


This is a very weird bug.

We received a report where a Hadoop client (the Impala Catalog server) failed to 
relogin and crashed every few hours. Initial indications suggested that the 
symptom matched HADOOP-13433.

But after we patched HADOOP-13433 (as well as HADOOP-15143), Impala Catalog 
server still kept crashing.

 
{noformat}
W0114 05:49:24.676743 41444 UserGroupInformation.java:1838] 
PriviledgedActionException as:impala/host1.example@example.com 
(auth:KERBEROS) 
cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
Failure to initialize security context
W0114 05:49:24.680363 41444 UserGroupInformation.java:1137] The first kerberos 
ticket is not TGT(the server principal is hdfs/host2.example@example.com), 
remove and destroy it.
W0114 05:49:24.680501 41444 UserGroupInformation.java:1137] The first kerberos 
ticket is not TGT(the server principal is hdfs/host3.example@example.com), 
remove and destroy it.
W0114 05:49:24.680593 41444 UserGroupInformation.java:1153] Warning, no 
kerberos ticket found while attempting to renew ticket{noformat}
The error “Failure to initialize security context” is suspicious here. Catalogd 
was unable to log in because of a Kerberos issue. The JDK expects the first 
Kerberos ticket of a principal to be a TGT; however, after this error, because 
the login did not succeed, the first ticket was no longer a TGT. The 
HADOOP-13433 patch removes the principal’s other tickets because it assumes the 
TGT is present in the principal’s ticket cache, which is untrue in this case. 
So in the end it removed all tickets.
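(A minimal, hypothetical sketch of the ticket-removal behaviour described in the 
previous paragraph; this is an illustration only, not the actual 
UserGroupInformation code, and all names here are assumptions. When the login 
failed and no TGT exists, a loop like this ends up destroying every ticket.)
{code:java}
import javax.security.auth.DestroyFailedException;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosTicket;

final class TgtCleanupSketch {
  /** Destroys every ticket whose server principal is not krbtgt/REALM@REALM. */
  static void dropNonTgtTickets(Subject subject) {
    for (KerberosTicket ticket : subject.getPrivateCredentials(KerberosTicket.class)) {
      String server = ticket.getServer().getName();  // e.g. hdfs/host2.example.com@EXAMPLE.COM
      String realm = ticket.getServer().getRealm();
      boolean isTgt = server.equals("krbtgt/" + realm + "@" + realm);
      if (!isTgt) {
        try {
          ticket.destroy();  // "remove and destroy it" in the warnings above
        } catch (DestroyFailedException e) {
          // ignored in this sketch
        }
      }
    }
  }
}
{code}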

And then
{noformat}
W0114 05:49:24.681946 41443 UserGroupInformation.java:1838] 
PriviledgedActionException as:impala/host1.example@example.com 
(auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)]
{noformat}
The error “Failed to find any Kerberos tgt” is typically an indication that the 
user’s Kerberos ticket has expired. However, that is definitely not the case 
here, since the ticket was only a little over 8 hours old.

After we patched HADOOP-13433, the error handling code exhibited NPE, as 
reported in HADOOP-15143.

 
{code:java}
I0114 05:50:26.758565 6384 RetryInvocationHandler.java:148] Exception while 
invoking listCachePools of class ClientNamenodeProtocolTranslatorPB over 
host4.example.com/10.0.121.66:8020 after 2 fail over attempts. Trying to fail 
over immediately. Java exception follows: java.io.IOException: Failed on local 
exception: java.io.IOException: Couldn't set up IO streams; Host Details : 
local host is: "host1.example.com/10.0.121.45"; destination host is: 
"host4.example.com":8020; at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
org.apache.hadoop.ipc.Client.call(Client.java:1506) at 
org.apache.hadoop.ipc.Client.call(Client.java:1439) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
 at com.sun.proxy.$Proxy9.listCachePools(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listCachePools(ClientNamenodeProtocolTranslatorPB.java:1261)
 at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
 at com.sun.proxy.$Proxy10.listCachePools(Unknown Source) at 
org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:55)
 at 
org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:33)
 at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
 at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
 at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
 at 
com.cloudera.impala.catalog.CatalogServiceCatalog$CachePoolReader.run(CatalogServiceCatalog.java:193)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 

[jira] [Commented] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432519#comment-16432519
 ] 

genericqa commented on HADOOP-15362:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 61 unchanged - 77 fixed = 64 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskChecker |
|   | hadoop.security.TestLdapGroupsMapping |
|   | hadoop.security.ssl.TestSSLFactory |
|   | hadoop.security.TestSecurityUtil |
|   | hadoop.security.alias.TestCredentialProviderFactory |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.conf.TestConfiguration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918384/HADOOP-15362.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7b2ddcb0ad68 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7c1e77d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-10 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15357:

Affects Version/s: 2.9.0
   3.0.0
 Target Version/s: 3.2.0, 3.1.1, 2.9.2, 3.0.3

+1 the latest patch looks good to me as well.  I'll commit this later today if 
there are no objections.

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch, 
> HADOOP-15357.003.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the property values. After [HADOOP-13556], 
> it uses props.getProperty() instead.
> The difference is that Configuration.get() does deprecation handling and, more 
> importantly, variable substitution on the value. So if a property value 
> contains a variable specified with ${variable_name}, it will no longer be 
> expanded when retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.
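(For illustration, a small self-contained example of the behaviour change 
described above; the property names and values here are made up, not taken from 
the patch.)
{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class PrefixSubstitutionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("base.dir", "/data");
    conf.set("myprefix.log.dir", "${base.dir}/logs");

    // Configuration.get() performs variable substitution:
    System.out.println(conf.get("myprefix.log.dir"));           // /data/logs

    // After HADOOP-13556, getPropsWithPrefix() returns the raw stored value:
    Map<String, String> props = conf.getPropsWithPrefix("myprefix.");
    System.out.println(props.get("log.dir"));                   // ${base.dir}/logs
  }
}
{code}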



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432449#comment-16432449
 ] 

Hudson commented on HADOOP-15376:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13954 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13954/])
HADOOP-15376. Remove double semi colons on imports that make Clover fall 
(aajisaka: rev cef8eb79810383f9970ed3713deecc18fbf0ffaa)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java


> Remove double semi colons on imports that make Clover fall over.
> 
>
> Key: HADOOP-15376
> URL: https://issues.apache.org/jira/browse/HADOOP-15376
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15376.01.patch
>
>
> Clover will fall over if there are double semicolons on imports.
> The error looks like:
> {code:java}
> [INFO] Clover free edition.
> [INFO] Updating existing database at 
> '/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
> [INFO] Processing files at 1.8 source level.
> [INFO] 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'
> [INFO] Instrumentation error
> com.atlassian.clover.api.CloverException: 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'{code}
>  
> Thankfully we only have one location with this:
> {code:java}
> $ find . -name \*.java -exec grep '^import .*;;' {} +
> ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
>  org.apache.commons.io.FileUtils;;{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-10 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432419#comment-16432419
 ] 

Jim Brennan commented on HADOOP-15357:
--

[~jlowe], [~lmccay], let me know if you would prefer that I add another test to 
cover the deprecation feature via this interface.  Otherwise, I think this may 
be ready to be committed.

 

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch, 
> HADOOP-15357.003.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the property values. After [HADOOP-13556], 
> it uses props.getProperty() instead.
> The difference is that Configuration.get() does deprecation handling and, more 
> importantly, variable substitution on the value. So if a property value 
> contains a variable specified with ${variable_name}, it will no longer be 
> expanded when retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15374) Add links of the new features of 3.1.0 to the top page

2018-04-10 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432417#comment-16432417
 ] 

Takanobu Asanuma commented on HADOOP-15374:
---

Thanks [~ajisakaa]!

> Add links of the new features of 3.1.0 to the top page
> --
>
> Key: HADOOP-15374
> URL: https://issues.apache.org/jira/browse/HADOOP-15374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15374.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15376:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-3.1. Thanks [~ehiggs]!

> Remove double semi colons on imports that make Clover fall over.
> 
>
> Key: HADOOP-15376
> URL: https://issues.apache.org/jira/browse/HADOOP-15376
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Trivial
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15376.01.patch
>
>
> Clover will fall over if there are double semicolons on imports.
> The error looks like:
> {code:java}
> [INFO] Clover free edition.
> [INFO] Updating existing database at 
> '/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
> [INFO] Processing files at 1.8 source level.
> [INFO] 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'
> [INFO] Instrumentation error
> com.atlassian.clover.api.CloverException: 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'{code}
>  
> Thankfully we only have one location with this:
> {code:java}
> $ find . -name \*.java -exec grep '^import .*;;' {} +
> ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
>  org.apache.commons.io.FileUtils;;{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15376:
---
Priority: Minor  (was: Trivial)

> Remove double semi colons on imports that make Clover fall over.
> 
>
> Key: HADOOP-15376
> URL: https://issues.apache.org/jira/browse/HADOOP-15376
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15376.01.patch
>
>
> Clover will fall over if there are double semicolons on imports.
> The error looks like:
> {code:java}
> [INFO] Clover free edition.
> [INFO] Updating existing database at 
> '/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
> [INFO] Processing files at 1.8 source level.
> [INFO] 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'
> [INFO] Instrumentation error
> com.atlassian.clover.api.CloverException: 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'{code}
>  
> Thankfully we only have one location with this:
> {code:java}
> $ find . -name \*.java -exec grep '^import .*;;' {} +
> ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
>  org.apache.commons.io.FileUtils;;{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432378#comment-16432378
 ] 

Akira Ajisaka commented on HADOOP-15376:


+1

> Remove double semi colons on imports that make Clover fall over.
> 
>
> Key: HADOOP-15376
> URL: https://issues.apache.org/jira/browse/HADOOP-15376
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Trivial
> Attachments: HADOOP-15376.01.patch
>
>
> Clover will fall over if there are double semicolons on imports.
> The error looks like:
> {code:java}
> [INFO] Clover free edition.
> [INFO] Updating existing database at 
> '/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
> [INFO] Processing files at 1.8 source level.
> [INFO] 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'
> [INFO] Instrumentation error
> com.atlassian.clover.api.CloverException: 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'{code}
>  
> Thankfully we only have one location with this:
> {code:java}
> $ find . -name \*.java -exec grep '^import .*;;' {} +
> ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
>  org.apache.commons.io.FileUtils;;{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15377:
-
Status: Patch Available  (was: Open)

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context. I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15377:
-
Attachment: HADOOP-15377.1.patch

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context. I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-10 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432304#comment-16432304
 ] 

Gabor Bota commented on HADOOP-14756:
-

Thank you for the review [~fabbri]. This clarified the expected behavior. I'll 
be able to create a patch soon.

> S3Guard: expose capability query in MetadataStore and add tests of 
> authoritative mode
> -
>
> Key: HADOOP-14756
> URL: https://issues.apache.org/jira/browse/HADOOP-14756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14756.001.patch
>
>
> {{MetadataStoreTestBase.testListChildren}} would be improved with the ability 
> to query the features offered by the store, and the outcome of {{put()}}, so 
> probe the correctness of the authoritative mode
> # Add predicate to MetadataStore interface  
> {{supportsAuthoritativeDirectories()}} or similar
> # If #1 is true, assert that directory is fully cached after changes
> # Add "isNew" flag to MetadataStore.put(DirListingMetadata); use to verify 
> when changes are made
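(A rough sketch of the capability probe in item #1 and the isNew flag in item 
#3 of the list above; the interface and method names here are assumptions for 
illustration, not the committed API.)
{code:java}
import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;

public interface AuthoritativeCapableMetadataStore {
  /** Whether this store can keep directory listings fully authoritative. */
  boolean supportsAuthoritativeDirectories();

  /**
   * Persist a directory listing.
   * @param meta  the listing to store
   * @param isNew true if the listing was not previously present, so tests can
   *              verify exactly when changes are made
   */
  void put(DirListingMetadata meta, boolean isNew);
}
{code}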



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HADOOP-15377:


Assignee: BELUGA BEHR

> Review of MetricsConfig.java
> 
>
> Key: HADOOP-15377
> URL: https://issues.apache.org/jira/browse/HADOOP-15377
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15377.1.patch
>
>
> I recently enabled debug-level logging in an MR application and was getting a 
> lot of log lines from this class that were just blank, without context. I've 
> enhanced the log messages to include additional context and made a few other 
> small cleanups while looking at this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15377) Review of MetricsConfig.java

2018-04-10 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-15377:


 Summary: Review of MetricsConfig.java
 Key: HADOOP-15377
 URL: https://issues.apache.org/jira/browse/HADOOP-15377
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.0.2
Reporter: BELUGA BEHR
 Attachments: HADOOP-15377.1.patch

I recently enabled debug-level logging in an MR application and was getting a 
lot of log lines from this class that were just blank, without context. I've 
enhanced the log messages to include additional context and made a few other 
small cleanups while looking at this class.
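(A hypothetical before/after of the kind of log-message change described above; 
this is not the attached patch, and the class and message are made up for 
illustration.)
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MetricsConfigLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(MetricsConfigLoggingExample.class);

  void load(String fileName) {
    // before: a debug line with no context reads as a blank entry in the log
    LOG.debug("");
    // after: say what is happening and to which file
    LOG.debug("Loading metrics configuration from [{}]", fileName);
  }
}
{code}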



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Attachment: HADOOP-15362.3.patch

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Open  (was: Patch Available)

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-10 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Patch Available  (was: Open)

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-10 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432145#comment-16432145
 ] 

Gabor Bota commented on HADOOP-15239:
-

I have also created a test for it. You can check it in this commit: 
https://github.com/bgaborg/hadoop/commit/58508d484576a5ee752a1310d0946bfa9f7cf9e7

If a test is needed for this fix, I can upload it as a patch.

> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: https://issues.apache.org/jira/browse/HADOOP-15239
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15239.001.patch
>
>
> when you call flush() on a closed S3A output stream, you get a stack trace. 
> This can cause problems in code with race conditions across threads, e.g. 
> FLINK-8543. 
> we could make it log@warn "stream closed" rather than raise an IOE. It's just 
> a hint, after all.
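(A minimal sketch of the proposal above, i.e. flush() on a closed stream logs a 
warning instead of raising an IOException; this is an illustration with assumed 
names, not the S3ABlockOutputStream code or the attached patch.)
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ClosedTolerantOutputStream extends OutputStream {
  private static final Logger LOG =
      LoggerFactory.getLogger(ClosedTolerantOutputStream.class);
  private boolean closed;

  @Override
  public void write(int b) throws IOException {
    if (closed) {
      throw new IOException("Stream closed");
    }
    // ... buffering/upload logic would live here in the real stream ...
  }

  @Override
  public synchronized void flush() throws IOException {
    if (closed) {
      LOG.warn("flush() called on a closed stream; ignoring");  // no-op instead of an IOE
      return;
    }
    // ... real flush work ...
  }

  @Override
  public synchronized void close() {
    closed = true;
  }
}
{code}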



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-5342) DataNodes do not start up because InconsistentFSStateException on just part of the disks in use

2018-04-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-5342.
--
Resolution: Cannot Reproduce

The last reported occurrence was in 2010, so I am closing this as Cannot 
Reproduce. Please reopen if you still experience this.

> DataNodes do not start up because InconsistentFSStateException on just part 
> of the disks in use
> ---
>
> Key: HADOOP-5342
> URL: https://issues.apache.org/jira/browse/HADOOP-5342
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.18.2
>Reporter: Christian Kunz
>Assignee: Hairong Kuang
>Priority: Critical
>
> After restarting a cluster (including rebooting) the dfs got corrupted 
> because many DataNodes did not start up, running into the following exception:
> 2009-02-26 22:33:53,774 ERROR org.apache.hadoop.dfs.DataNode: 
> org.apache.hadoop.dfs.InconsistentFSStateException: Directory xxx  is in an 
> inconsistent state: version file in current directory is missing.
>   at 
> org.apache.hadoop.dfs.Storage$StorageDirectory.analyzeStorage(Storage.java:326)
>   at 
> org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:105)
>   at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:306)
>   at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:223)
>   at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3030)
>   at 
> org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2985)
>   at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2993)
>   at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3115)
> This happens when using multiple disks with at least one previously marked as 
> read-only, so that its storage version became out-dated; after the reboot it 
> was mounted read-write, and the DataNode failed to start because of the 
> out-dated version.
> This is a big headache. If a DataNode has multiple disks and at least one of 
> them has the correct storage version, then out-dated versions on the other 
> disks should not bring down the DataNode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7031) Make DelegateToFileSystem constructor public to allow implementations from other packages for testing

2018-04-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-7031.
--
Resolution: Won't Fix

There has been no activity, and not even a new watcher, on this issue in the 
last 7 years, so I am closing it for now.

> Make DelegateToFileSystem constructor public to allow implementations from 
> other packages for testing
> -
>
> Key: HADOOP-7031
> URL: https://issues.apache.org/jira/browse/HADOOP-7031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Krishna Ramachandran
>Priority: Major
>
> MapReduce tests use FileSystem APIs to implement a TestFileSystem that 
> simulates various error and failure conditions. This is no longer possible 
> with the new FileContext APIs.
> For example, we would like to extend DelegateToFileSystem in the unit testing 
> framework:
>   public static class TestFileSystem extends DelegateToFileSystem {
>     public TestFileSystem(Configuration conf) throws IOException, 
>         URISyntaxException {
>       super(URI.create("faildel:///"), new FakeFileSystem(conf), conf, 
>           "faildel", false);
>     }
>   }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432092#comment-16432092
 ] 

genericqa commented on HADOOP-15376:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15376 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918346/HADOOP-15376.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e16450d0aa7c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e87be8a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14474/testReport/ |
| Max. process+thread count | 1505 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14474/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove double semi colons on imports that make Clover fall over.
> 
>
> 

[jira] [Commented] (HADOOP-15374) Add links of the new features of 3.1.0 to the top page

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432027#comment-16432027
 ] 

Hudson commented on HADOOP-15374:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13951 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13951/])
HADOOP-15374. Add links of the new features of 3.1.0 to the top page (aajisaka: 
rev 7623cc5a982219fff2bdd9a84650f45106cbdf47)
* (edit) hadoop-project/src/site/site.xml


> Add links of the new features of 3.1.0 to the top page
> --
>
> Key: HADOOP-15374
> URL: https://issues.apache.org/jira/browse/HADOOP-15374
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15374.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15374) Add links of the new features of 3.1.0 to the top page

2018-04-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15374:
---
Issue Type: Bug  (was: Improvement)

> Add links of the new features of 3.1.0 to the top page
> --
>
> Key: HADOOP-15374
> URL: https://issues.apache.org/jira/browse/HADOOP-15374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15374.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15374) Add links of the new features of 3.1.0 to the top page

2018-04-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15374:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-3.1. Thanks [~tasanuma0829]!

> Add links of the new features of 3.1.0 to the top page
> --
>
> Key: HADOOP-15374
> URL: https://issues.apache.org/jira/browse/HADOOP-15374
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15374.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15374) Add links of the new features of 3.1.0 to the top page

2018-04-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432009#comment-16432009
 ] 

Akira Ajisaka commented on HADOOP-15374:


LGTM, +1

> Add links of the new features of 3.1.0 to the top page
> --
>
> Key: HADOOP-15374
> URL: https://issues.apache.org/jira/browse/HADOOP-15374
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15374.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15376:

Assignee: Ewan Higgs
  Status: Patch Available  (was: Open)

> Remove double semi colons on imports that make Clover fall over.
> 
>
> Key: HADOOP-15376
> URL: https://issues.apache.org/jira/browse/HADOOP-15376
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Trivial
> Attachments: HADOOP-15376.01.patch
>
>
> Clover will fall over if there are double semicolons on imports.
> The error looks like:
> {code:java}
> [INFO] Clover free edition.
> [INFO] Updating existing database at 
> '/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
> [INFO] Processing files at 1.8 source level.
> [INFO] 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'
> [INFO] Instrumentation error
> com.atlassian.clover.api.CloverException: 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'{code}
>  
> Thankfully we only have one location with this:
> {code:java}
> $ find . -name \*.java -exec grep '^import .*;;' {} +
> ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
>  org.apache.commons.io.FileUtils;;{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15376:

Attachment: HADOOP-15376.01.patch

> Remove double semi colons on imports that make Clover fall over.
> 
>
> Key: HADOOP-15376
> URL: https://issues.apache.org/jira/browse/HADOOP-15376
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Priority: Trivial
> Attachments: HADOOP-15376.01.patch
>
>
> Clover will fall over if there are double semicolons on imports.
> The error looks like:
> {code:java}
> [INFO] Clover free edition.
> [INFO] Updating existing database at 
> '/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
> [INFO] Processing files at 1.8 source level.
> [INFO] 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'
> [INFO] Instrumentation error
> com.atlassian.clover.api.CloverException: 
> /Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
>  EOF, found 'import'{code}
>  
> Thankfully we only have one location with this:
> {code:java}
> $ find . -name \*.java -exec grep '^import .*;;' {} +
> ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
>  org.apache.commons.io.FileUtils;;{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-15376:
---

 Summary: Remove double semi colons on imports that make Clover 
fall over.
 Key: HADOOP-15376
 URL: https://issues.apache.org/jira/browse/HADOOP-15376
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ewan Higgs


Clover will fall over if there are double semicolons on imports.

The error looks like:
{code:java}
[INFO] Clover free edition.
[INFO] Updating existing database at 
'/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
[INFO] Processing files at 1.8 source level.
[INFO] 
/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
 EOF, found 'import'
[INFO] Instrumentation error
com.atlassian.clover.api.CloverException: 
/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
 EOF, found 'import'{code}
 

Thankfully we only have one location with this:
{code:java}
$ find . -name \*.java -exec grep '^import .*;;' {} +
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
 org.apache.commons.io.FileUtils;;{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431856#comment-16431856
 ] 

genericqa commented on HADOOP-14445:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
53s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
199 unchanged - 3 fixed = 199 total (was 202) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:749e106 |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918323/HADOOP-14445.branch-2.8.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6f3621e14833 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 5f8ab3a |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14473/testReport/ |
| Max. process+thread count | 1405 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14473/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT |

[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Attachment: HADOOP-14445.branch-2.8.006.patch

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS documentation states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
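
For illustration only, here is a minimal standalone sketch (not the Hadoop classes themselves; the class name, host names, and token strings are made up) of the lookup behavior described above: because the credential lookup is keyed on the URL's host and port, a delegation token stored for one KMS instance is simply not found when the client fails over to another instance.

{code:java}
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of a Credentials#getToken-style lookup keyed by a
// host:port "service" string, mirroring the effect of
// SecurityUtil.buildTokenService in the snippet above.
public class KmsTokenKeySketch {

  // Derive the lookup key the same way the snippet above does: host + port.
  private static String serviceKey(URL url) {
    return url.getHost() + ":" + url.getPort();
  }

  public static void main(String[] args) throws Exception {
    Map<String, String> creds = new HashMap<>();

    // The client fetched a delegation token from the first KMS instance and
    // stored it under that instance's host:port key.
    URL kms1 = new URL("https://kms1.example.com:9600/kms");
    creds.put(serviceKey(kms1), "delegation-token-from-kms1");

    // A load-balancing provider later routes the request to a second
    // instance; its host:port key is different, so the lookup misses.
    URL kms2 = new URL("https://kms2.example.com:9600/kms");

    System.out.println("kms1 lookup: " + creds.get(serviceKey(kms1))); // token found
    System.out.println("kms2 lookup: " + creds.get(serviceKey(kms2))); // null
  }
}
{code}

One way to share tokens across instances, as the discussion here suggests, would be to key them on a logical service name covering the whole KMS ensemble rather than on an individual host:port.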



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431789#comment-16431789
 ] 

Xiao Chen commented on HADOOP-14445:


I'm not sure how to build locally to get a dry run of the javac from 
pre-commit, but I think this should fix them all...

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS documentation states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Attachment: HADOOP-14445.branch-2.06.patch

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS documentation states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431774#comment-16431774
 ] 

genericqa commented on HADOOP-14445:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
 3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 20s{color} 
| {color:red} root generated 2 new + 1438 unchanged - 1 fixed = 1440 total (was 
1439) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
313 unchanged - 6 fixed = 313 total (was 319) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
45s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918312/HADOOP-14445.branch-2.05.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 93594b545daa 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / f667ef1 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_171 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14471/artifact/out/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14471/testReport/ |
| Max. process+thread count | 1450 (vs. ulimit of 1) |
| modules | C: