[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207046#comment-16207046
 ] 

Aaron Fabbri commented on HADOOP-13786:
---

I'm still reviewing and testing this stuff. Looks pretty good, but it takes time 
to cover a 26,000-line diff with any rigor. My finding-bugs-by-inspection rate 
has been pretty low so far, congrats. ;-)

{quote}
What I'd like to suggest here is we create a branch for the S3Guard phase II 
work (HADOOP-14825), make this the first commit & then work on the s3guard II 
improvements above it.
{quote}

I appreciate the awesome work here. My two cents, taking a step back a bit: I 
think we should try to move towards small patches and short-lived feature 
branches. How long do we expect the feature branch to live? Two to four weeks 
is reasonable IMO; two is better.

I'd like to make the case for keeping the main codepaths solid and integrated, 
and feature-flagging (via config) new work, instead of having major rewrites 
live outside trunk for too long. Two main reasons: (1) it doesn't block other 
work, and (2) better quality and less risk: the closer we get to continuous 
integration, the more quality benefit we get. Happy to elaborate on that if 
needed. :-)

The fantastic work folks like [~ste...@apache.org] have done on improving our 
tests really makes this possible. We should take advantage of it.



> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> cloud-intergration-test-failure.log, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-10-16 Thread Zhiyuan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207041#comment-16207041
 ] 

Zhiyuan Yang commented on HADOOP-13921:
---

[~busbey] That helps to some extent, but I don't have confidence that people 
will always read the doc. The best way to get people to notice is to break their 
code as early as possible. Otherwise, a rule without enforcement doesn't really 
make a difference.

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14944:
---
Attachment: HADOOP-14944.branch-2.02.patch

Thanks John, attaching a branch-2 patch.

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.05.patch, 
> HADOOP-14944.06.patch, HADOOP-14944.branch-2.01.patch, 
> HADOOP-14944.branch-2.02.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207025#comment-16207025
 ] 

Sean Busbey commented on HADOOP-13921:
--

How can we update the description in the release notes (i.e. for 
[3.0.0-alpha4|http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-project-dist/hadoop-common/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.html])
 to make this change easier to spot for downstream folks?

It's not obvious from TEZ-3853 which version of Hadoop 3 you first attempted to 
update to. Would calling out the change in the earlier alpha/beta release notes 
have made it easier to get a heads-up?

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207008#comment-16207008
 ] 

John Zhuge commented on HADOOP-14944:
-

+1 LGTM! Good catch to call unregisterSource in shutdownSingleton.

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.05.patch, 
> HADOOP-14944.06.patch, HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14944:
---
Attachment: HADOOP-14944.06.patch

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.05.patch, 
> HADOOP-14944.06.patch, HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207000#comment-16207000
 ] 

Xiao Chen commented on HADOOP-14944:


Thanks John for taking the effort and presenting a patch directly! Agreed, this 
now feels more like developing the JvmMetrics class than the KMS, but it's still 
in the mighty hadoop code base. :)

I like your idea of providing a less hacky solution. Implementation-wise, though, 
{{shutdown}} is not really shutting down now. I made some improvements in patch 
6 to bring {{shutdown}} into closer parity with init, and enriched the javadoc to 
explain why. For the same reason the javadoc explains, I think we are fine not 
changing all the other places where {{initSingleton}} is invoked to also shut 
down. Hopefully this javadoc will be helpful for people adding {{JvmMetrics}} to 
other classes in the future.

Unrelated: I found that {{JvmMetrics}}' constructor not being private is not 
helping, so I added a {{VisibleForTesting}} annotation to it.
Also, {{TestJvmMetrics}} tests the pause monitor more than the JVM metrics, but 
I'd like to leave that out of this jira to keep our focus.

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.05.patch, 
> HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206998#comment-16206998
 ] 

John Zhuge commented on HADOOP-14944:
-

Please note JvmMetrics.Singleton#impl is not refcounted, thus in the following 
call sequence:
1. JvmMetrics.initSingleton
2. JvmMetrics.initSingleton
3. JvmMetrics.shutdownSingleton
4. JvmMetrics.shutdownSingleton
The singleton is no longer usable after step 3. However, I don't think we need 
to support this use case. 
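
To make the sequence concrete, here is a simplified sketch of a non-refcounted 
singleton (illustrative only, not the actual JvmMetrics code), showing why the 
instance is gone after the first shutdownSingleton even though initSingleton was 
called twice:

{code:java}
// Simplified illustration; the real JvmMetrics keeps more state than this.
final class SingletonHolder {
  private static SingletonHolder instance;   // note: no reference count

  static synchronized SingletonHolder initSingleton() {
    if (instance == null) {
      instance = new SingletonHolder();      // calls 1 and 2 both return this
    }
    return instance;
  }

  static synchronized void shutdownSingleton() {
    instance = null;                         // call 3 drops it; call 4 is a no-op
  }
}
{code}

With a reference count, the second initSingleton would bump a counter and only 
the matching final shutdown would actually drop the instance.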


> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.05.patch, 
> HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206974#comment-16206974
 ] 

Hadoop QA commented on HADOOP-14944:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HADOOP-14944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892513/HADOOP-14944.05.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 72676f6e79f7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b406d8e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13524/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 

[jira] [Updated] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14944:

Attachment: HADOOP-14944.05.patch

[~xiaochen] Thanks for your patience. This seemingly simple JIRA reveals the 
ugliness of testing singletons.

Patch 04 looks great overall, except for the new miniClusterMode flag. I think 
we can take a different approach, shown in patch 05 which I just uploaded; it's 
easier for me to show the code than to post review comments. The approach is to 
shut down the JvmMetrics singleton in {{KMSWebServer#stop}}, right before 
calling {{DefaultMetricsSystem.shutdown}}. Let me know what you think.

Passed {{mvn test -P\!shelltest -Dtest=TestJvmMetrics,TestKMS,TestKMSWithZK}}.
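
Roughly, the ordering in patch 05 looks like the sketch below (illustrative 
only, not the patch itself; it assumes the {{JvmMetrics.shutdownSingleton()}} 
method discussed above, and everything other than the two metrics calls is a 
placeholder):

{code:java}
// Sketch of KMSWebServer#stop: drop the JVM metrics source before the
// metrics system itself goes away.
public void stop() throws Exception {
  // ... stop the embedded HTTP server ...
  JvmMetrics.shutdownSingleton();    // unregister the JvmMetrics source
  DefaultMetricsSystem.shutdown();   // then shut down the metrics system
}
{code}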

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.05.patch, 
> HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14949) TestKMS#testACLs fails intermittently

2017-10-16 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206909#comment-16206909
 ] 

Akira Ajisaka commented on HADOOP-14949:


Thank you, [~xiaochen]!

> TestKMS#testACLs fails intermittently
> -
>
> Key: HADOOP-14949
> URL: https://issues.apache.org/jira/browse/HADOOP-14949
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HADOOP-14949.01.patch, HADOOP-14949.02.patch, 
> HADOOP-14949.03.patch
>
>
> We have seen some intermittent failures of this test:
> Error Message
> {noformat}
> java.lang.AssertionError
> {noformat}
> Stack Trace
> {noformat}java.lang.AssertionError: Should not have been able to 
> reencryptEncryptedKey
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1616)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1608)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:313)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:97)
> {noformat}
> Standard Output
> {noformat}
> 2017-10-07 09:44:11,112 INFO  log - jetty-6.1.26.cloudera.4
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   Java runtime version : 
> 1.7.0_121-b00
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   User: slave
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   KMS Hadoop Version: 
> 2.6.0-cdh5.14.0-SNAPSHOT
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'CREATE' ACL 'CREATE,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'DELETE' ACL 'DELETE'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'ROLLOVER' ACL 
> 'ROLLOVER,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'GET' ACL 'GET'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_KEYS' ACL 'GET_KEYS'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_METADATA' ACL 'GET_METADATA'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL 
> 'SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GENERATE_EEK' ACL 'GENERATE_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'DECRYPT_EEK' ACL 'DECRYPT_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k0' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k1' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,136 INFO  KMSAudit - No audit logger configured, using 
> default.
> 2017-10-07 09:44:11,137 INFO  KMSAudit - Initializing audit logger class 
> org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
> 2017-10-07 09:44:11,137 INFO  KMSWebApp - Initialized KeyProvider 
> CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Default key bitlength is 128
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - KMS Started
> 2017-10-07 09:44:11,141 INFO  PackagesResourceConfig - Scanning for root 
> resource and provider classes in the packages:
>   org.apache.hadoop.crypto.key.kms.server
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Root resource classes 
> found:
>   class org.apache.hadoop.crypto.key.kms.server.KMS
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Provider classes found:
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONWriter
>   class org.apache.hadoop.crypto.key.kms.server.KMSExceptionsProvider
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONReader
> 2017-10-07 09:44:11,147 INFO  WebApplicationImpl - Initiating Jersey 
> application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
> 2017-10-07 09:44:11,224 INFO  log - Started SocketConnector@localhost:46764
> Test KMS running at: http://localhost:46764/kms
> 2017-10-07 09:44:11,254 INFO  kms-audit - UNAUTHORIZED[op=CREATE_KEY, key=k, 
> user=client] 
> 2017-10-07 09:44:11,255 WARN  KMS - User cli...@example.com (auth:KERBEROS) 
> request POST http://localhost:46764/kms/v1/keys caused exception.

[jira] [Commented] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206884#comment-16206884
 ] 

John Zhuge commented on HADOOP-14954:
-

When {{DefaultMetricsSystem#miniClusterMode}} is set to true, {{init}} creates 
a new object every time instead of reusing the singleton, for testing purposes. 
Really messed up.

{{refCount}} no longer tracks the references to the singleton; rather, it tracks 
the number of {{init}} calls minus the number of {{shutdown}} calls. So the 
patch still works. An alternative is to update {{shutdown}} so that it does not 
decrement refCount in mini cluster mode.
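
For reference, a minimal sketch of the reordering proposed in the description 
below (only the increment moves ahead of the early return; the rest of {{init}} 
is unchanged):

{code:java}
public synchronized MetricsSystem init(String prefix) {
  ++refCount;  // count the call even when we return early below
  if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
    LOG.warn(this.prefix + " metrics system already initialized!");
    return this;
  }
  this.prefix = checkNotNull(prefix, "prefix");
  // ... rest of init unchanged ...
}
{code}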


> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14954.001.patch
>
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}}  to the beginning of this method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14955) Document the support of multiple authentications for HTTP

2017-10-16 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-14955:
-

 Summary: Document the support of multiple authentications for HTTP
 Key: HADOOP-14955
 URL: https://issues.apache.org/jira/browse/HADOOP-14955
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony


Due to the enhancements done via HADOOP-12082, hadoop supports multiple 
authentication mechanisms for the HTTP protocol.

This needs to be documented for wider usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14948) Document missing config key hadoop.treat.subject.external

2017-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206807#comment-16206807
 ] 

Hudson commented on HADOOP-14948:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13095 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13095/])
HADOOP-14948. Document missing config key hadoop.treat.subject.external. 
(weichiu: rev e906108fc98a011630d12a43e557b81d7ef7ea5d)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Document missing config key hadoop.treat.subject.external
> -
>
> Key: HADOOP-14948
> URL: https://issues.apache.org/jira/browse/HADOOP-14948
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 2.8.3, 3.0.0
>
> Attachments: HADOOP-14948.01.patch
>
>
> HADOOP-13805 added the config key hadoop.treat.subject.external, but which is 
> not properly documented. File this jira to add it to core-default.xml



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API

2017-10-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206808#comment-16206808
 ] 

Haibo Chen commented on HADOOP-14238:
-

Manually checking hadoop-yarn-api, hadoop-yarn-applications, 
hadoop-yarn-common, hadoop-yarn-client and hadoop-yarn-registry,
there is no exposure there either.
We do, however, expose Optional in ReconfigurationTaskStatus, which is marked as 
public and stable. I think this needs to be fixed.

It would still be great if we can get the tool to double-check, just in case. 
cc [~andrew.wang].

> [Umbrella] Rechecking Guava's object is not exposed to user-facing API
> --
>
> Key: HADOOP-14238
> URL: https://issues.apache.org/jira/browse/HADOOP-14238
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Priority: Critical
>
> This is reported by [~hitesh] on HADOOP-10101.
> At least, AMRMClient#waitFor takes Guava's Supplier instance as an instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-14938:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~mi...@cloudera.com].  Committed to branch-3.0 and branch-2 (and 
previously trunk)!

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.
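
As an illustration of the lazy approach (a sketch only, not the committed patch; 
the field and accessor names here are placeholders), the map would be created on 
first use rather than eagerly in the constructor:

{code:java}
// Sketch: allocate the map only when a caller actually records a resource.
private Map<String, String[]> updatingResource;   // no eager ConcurrentHashMap

private synchronized Map<String, String[]> getUpdatingResource() {
  if (updatingResource == null) {
    updatingResource = new ConcurrentHashMap<>();
  }
  return updatingResource;
}
{code}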



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14948) Document missing config key hadoop.treat.subject.external

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14948:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   2.8.3
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed the patch rev 01 to trunk (3.0.0), branch-2 (2.9.0) and branch-2.8 
(2.8.3).
Thanks Ajay Kumar!

> Document missing config key hadoop.treat.subject.external
> -
>
> Key: HADOOP-14948
> URL: https://issues.apache.org/jira/browse/HADOOP-14948
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 2.8.3, 3.0.0
>
> Attachments: HADOOP-14948.01.patch
>
>
> HADOOP-13805 added the config key hadoop.treat.subject.external, but which is 
> not properly documented. File this jira to add it to core-default.xml



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14948) Document missing config key hadoop.treat.subject.external

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206785#comment-16206785
 ] 

Wei-Chiu Chuang commented on HADOOP-14948:
--

+1 will commit shortly.

> Document missing config key hadoop.treat.subject.external
> -
>
> Key: HADOOP-14948
> URL: https://issues.apache.org/jira/browse/HADOOP-14948
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14948.01.patch
>
>
> HADOOP-13805 added the config key hadoop.treat.subject.external, but which is 
> not properly documented. File this jira to add it to core-default.xml



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13807) UGI renewal thread should be spawn only if the keytab is not external

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-13807.
--
Resolution: Duplicate

Based on my understanding of this jira, it is a dup of HADOOP-13805.
I'll close this jira as a result. Please reopen if this is not the case. Thanks!

> UGI renewal thread should be spawn only if the keytab is not external
> -
>
> Key: HADOOP-13807
> URL: https://issues.apache.org/jira/browse/HADOOP-13807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha1
>Reporter: Alejandro Abdelnur
>Priority: Minor
>
> The renewal thread should not be spawned if the keytab is external.
> Because of HADOOP-13805 there can be a case that an UGI does not have a 
> keytab because authentication is managed by the host program. In such case we 
> should not spawn the renewal thread.
> Currently this is logging a warning "Exception encountered while running the 
> renewal command. Aborting renew thread. " and exiting the thread. The warning 
> may be misleading and running the thread is not really needed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206783#comment-16206783
 ] 

Hadoop QA commented on HADOOP-14944:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | HADOOP-14944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892486/HADOOP-14944.branch-2.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e07ff51d4a48 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / b876a93 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13523/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13523/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> 

[jira] [Commented] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API

2017-10-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206781#comment-16206781
 ] 

Haibo Chen commented on HADOOP-14238:
-

I checked the mapreduce client modules manually, that is, 
hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-core, and 
hadoop-mapreduce-client-common. There is no exposure of Guava in the public API.

> [Umbrella] Rechecking Guava's object is not exposed to user-facing API
> --
>
> Key: HADOOP-14238
> URL: https://issues.apache.org/jira/browse/HADOOP-14238
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Priority: Critical
>
> This is reported by [~hitesh] on HADOOP-10101.
> At least, AMRMClient#waitFor takes Guava's Supplier instance as an instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206769#comment-16206769
 ] 

Wei-Chiu Chuang commented on HADOOP-12502:
--

Hi [~vinayrpet], thanks for the patch; I finally had the chance to review it.
Overall it looks good to me, and it looks like it also prevents OOM for most 
commands, which is good.

One question though: is it necessary to introduce a new FileSystem API, 
listStatusIterator(final Path p, final PathFilter filter)?
From my perspective it seems a useful addition, but it doesn't need to be 
included in this patch. Adding a new FileSystem API is always concerning.
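
For context, the existing single-argument {{FileSystem#listStatusIterator(Path)}} 
already lets callers walk a directory incrementally, and a filter could be 
applied on the caller's side along these lines (a rough sketch under that 
assumption, not the patch; {{fs}}, {{dir}}, {{filter}} and {{process}} are 
placeholders):

{code:java}
// Sketch: iterate a directory without materializing the full FileStatus[],
// which is what leads to the OOM on very large recursive listings.
RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
while (it.hasNext()) {
  FileStatus st = it.next();
  if (filter == null || filter.accept(st.getPath())) {
    process(st);   // e.g. set replication, recurse into subdirectories, ...
  }
}
{code}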

> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Philipp Schuegerl
>Assignee: Vinayakumar B
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, 
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, 
> HADOOP-12502-06.patch
>
>
> Setting the replication of a HDFS folder recursively can run out of memory. 
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>   at java.util.Arrays.copyOfRange(Arrays.java:2694)
>   at java.lang.String.(String.java:203)
>   at java.lang.String.substring(String.java:1913)
>   at java.net.URI$Parser.substring(URI.java:2850)
>   at java.net.URI$Parser.parse(URI.java:3046)
>   at java.net.URI.(URI.java:753)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   at org.apache.hadoop.fs.Path.(Path.java:116)
>   at org.apache.hadoop.fs.Path.(Path.java:94)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206749#comment-16206749
 ] 

Robert Kanter commented on HADOOP-14938:


Thanks for the branch-2 and branch-3.0 patches.  The test failure looks 
unrelated.  I ran it locally with no problem.  
+1 will commit shortly.

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-10-16 Thread Zhiyuan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206726#comment-16206726
 ] 

Zhiyuan Yang commented on HADOOP-13921:
---

This breaks user code in a bad way, like TEZ-3853. Changing the type but 
keeping the name makes it hard to discover the incompatibility. User code can 
compile on both hadoop2 and hadoop3, but binaries cannot run on both versions. 
The error appears only when the relevant code path gets triggered.

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14944:
---
Attachment: HADOOP-14944.branch-2.01.patch

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206709#comment-16206709
 ] 

Xiao Chen commented on HADOOP-14944:


Discussed with John offline, and both of us investigated this. Summarizing 
below:
- The test failure in patch 1 was due to {{MetricsSystemImpl}}'s init and 
register being different. Created HDFS-12668 for that. (But looking again now, 
this should be okay if we call {{DefaultMetricsSystem.setMiniClusterMode(true)}} 
in the test.)
- For trunk, {{KMSWebServer}} would be a better place, since that's the 'main' 
class where start/stop happens.
- For branch-2, since there is currently no 'main' class, this will go in 
{{KMSWebApp}}. This may be an issue with [Tomcat virtual 
hosting|https://tomcat.apache.org/tomcat-6.0-doc/virtual-hosting-howto.html], in 
the sense that {{contextInitialized}} could be called multiple times. The 
metrics are still fine since the singleton is reference counted, but multiple 
{{JvmPauseMonitor}}s may be created. Solving that would involve fundamental 
changes to how the pause monitor and the metrics class interact, so it will be 
left for a future jira.

Also, when verifying the test, running {{TestKMS#testKMSJMX}} alone passes, but 
running the whole test class, {{testKMSJMX}} fails. This is because the 
JvmMetrics singleton's {{impl}} won't change once it has been created by the 
first test case in the JVM. Added a flag similar to {{DefaultMetricsSystem}}'s 
to get around this.

Attached patch 4 for trunk and patch 1 for branch-2.
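
For readers following along, the wiring being discussed looks roughly like the 
sketch below (under the assumptions above, not the actual patch; {{conf}} and 
the surrounding class are placeholders):

{code:java}
// Sketch: initialize the metrics system, the JvmMetrics singleton and a
// JvmPauseMonitor for the KMS, e.g. from KMSWebServer (trunk) or
// KMSWebApp#contextInitialized (branch-2).
DefaultMetricsSystem.initialize("KMS");
JvmPauseMonitor pauseMonitor = new JvmPauseMonitor();
pauseMonitor.init(conf);          // conf: the KMS Configuration (placeholder)
pauseMonitor.start();
JvmMetrics jvmMetrics = JvmMetrics.initSingleton("KMS", null);
jvmMetrics.setPauseMonitor(pauseMonitor);
{code}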

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14944) Add JvmMetrics to KMS

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14944:
---
Attachment: HADOOP-14944.04.patch

> Add JvmMetrics to KMS
> -
>
> Key: HADOOP-14944
> URL: https://issues.apache.org/jira/browse/HADOOP-14944
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14944.01.patch, HADOOP-14944.02.patch, 
> HADOOP-14944.03.patch, HADOOP-14944.04.patch, HADOOP-14944.branch-2.01.patch
>
>
> Let's make KMS to use {{JvmMetrics}} to report aggregated statistics about 
> heap / GC etc.
> This will make statistics monitoring easier, and compatible across the 
> tomcat-based (branch-2) KMS and the jetty-based (branch-3) KMS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206697#comment-16206697
 ] 

Hadoop QA commented on HADOOP-14954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  3s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HADOOP-14954 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892459/HADOOP-14954.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c7948841b84 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1fcbe7c |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13522/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13522/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> 

[jira] [Commented] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206694#comment-16206694
 ] 

Wei-Chiu Chuang commented on HADOOP-14880:
--

Hi [~gabor.bota], thanks for the patch!
I'm sorry, I meant to say that the configs should be documented in 
core-default.xml.

kms-default.xml and kms-site.xml are typically used by KMS servers; they are 
not supposed to hold client-specific configs.
You can also find similar configs, such as 
hadoop.security.kms.client.encrypted.key.cache.num.refill.threads, in 
core-default.xml.

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of the KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both the client-side connection 
> timeout and the read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.
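
A minimal sketch of how a test might exercise this config through 
{{Configuration}}; the fallback of 60 seconds below is an assumption for the 
sketch, not necessarily the client's actual default:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class KmsClientTimeoutSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Shrink the timeout so a blocked KMS endpoint fails fast in a test.
    conf.setInt("hadoop.security.kms.client.timeout", 5);
    // Assumed fallback of 60s for illustration only.
    int timeoutSeconds = conf.getInt("hadoop.security.kms.client.timeout", 60);
    System.out.println("KMS client connect/read timeout: " + timeoutSeconds + "s");
  }
}
{code}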






[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HADOOP-14938:

Status: In Progress  (was: Patch Available)

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.
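
A minimal sketch of the lazy-initialization idea (field and accessor names are 
illustrative; the actual patch may differ):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LazyUpdatingResourceSketch {
  // Created only on first use, so instances that never record an updating
  // resource no longer pay for an empty ConcurrentHashMap.
  private volatile Map<String, String[]> updatingResource;

  private Map<String, String[]> getUpdatingResource() {
    if (updatingResource == null) {
      synchronized (this) {
        if (updatingResource == null) {
          updatingResource = new ConcurrentHashMap<>();
        }
      }
    }
    return updatingResource;
  }

  public void recordUpdate(String key, String[] sources) {
    getUpdatingResource().put(key, sources);
  }
}
{code}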






[jira] [Commented] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206692#comment-16206692
 ] 

Misha Dmitriev commented on HADOOP-14938:
-

As far as I understand, the report above says that there are no failed tests, 
just some other kind of failure somewhere.

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HADOOP-14938:

Status: Patch Available  (was: In Progress)

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Commented] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206675#comment-16206675
 ] 

Hadoop QA commented on HADOOP-14938:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
 1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 131 unchanged - 15 fixed = 132 total (was 146) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 22s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.log.TestLogLevel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | HADOOP-14938 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892463/HADOOP-14938.branch-2.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 90cff70e2640 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 0bddcf1 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13521/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13521/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13521/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13521/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Comment Edited] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2017-10-16 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206665#comment-16206665
 ] 

Junping Du edited comment on HADOOP-13579 at 10/16/17 10:09 PM:


Hi [~ozawa] and [~ajisakaa], I think this patch needs to be backported to 2.8 
and branch-2, doesn't it? Do we have a special reason not to do so?


was (Author: djp):
Hi [~ozawa] and [~ajisakaa], I think this patch needs to be backported to 2.8 
and branch-2, doesn't it?

> Fix source-level compatibility after HADOOP-11252
> -
>
> Key: HADOOP-13579
> URL: https://issues.apache.org/jira/browse/HADOOP-13579
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Akira Ajisaka
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Fix For: 2.6.5, 2.7.4
>
> Attachments: HADOOP-13579-branch-2.6.001.patch, 
> HADOOP-13579-branch-2.6.002.patch, HADOOP-13579-branch-2.6.003.patch, 
> HADOOP-13579-branch-2.7.001.patch, HADOOP-13579-branch-2.7.002.patch, 
> HADOOP-13579-branch-2.7.003.patch
>
>
> Reported by [~chiwanpark]
> bq. Since the 2.7.3 release, Client.get/setPingInterval has changed from 
> public to package-private.
> bq. Giraph is one example broken by this change. 
> (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)
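
For context, a minimal sketch of the kind of downstream call that broke; it 
compiles only against releases where the method is public (e.g. before the 
HADOOP-11252 change, or after this fix):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.Client;

public class PingIntervalSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Giraph-style direct call; it fails to compile once setPingInterval
    // becomes package-private.
    Client.setPingInterval(conf, 60_000);
  }
}
{code}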






[jira] [Commented] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2017-10-16 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206665#comment-16206665
 ] 

Junping Du commented on HADOOP-13579:
-

Hi [~ozawa] and [~ajisakaa], I think this patch needs to be backported to 2.8 
and branch-2, doesn't it?

> Fix source-level compatibility after HADOOP-11252
> -
>
> Key: HADOOP-13579
> URL: https://issues.apache.org/jira/browse/HADOOP-13579
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Akira Ajisaka
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Fix For: 2.6.5, 2.7.4
>
> Attachments: HADOOP-13579-branch-2.6.001.patch, 
> HADOOP-13579-branch-2.6.002.patch, HADOOP-13579-branch-2.6.003.patch, 
> HADOOP-13579-branch-2.7.001.patch, HADOOP-13579-branch-2.7.002.patch, 
> HADOOP-13579-branch-2.7.003.patch
>
>
> Reported by [~chiwanpark]
> bq. Since the 2.7.3 release, Client.get/setPingInterval has changed from 
> public to package-private.
> bq. Giraph is one example broken by this change. 
> (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)






[jira] [Assigned] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2017-10-16 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned HADOOP-13579:
---

Assignee: Tsuyoshi Ozawa  (was: Junping Du)

> Fix source-level compatibility after HADOOP-11252
> -
>
> Key: HADOOP-13579
> URL: https://issues.apache.org/jira/browse/HADOOP-13579
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Akira Ajisaka
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Fix For: 2.6.5, 2.7.4
>
> Attachments: HADOOP-13579-branch-2.6.001.patch, 
> HADOOP-13579-branch-2.6.002.patch, HADOOP-13579-branch-2.6.003.patch, 
> HADOOP-13579-branch-2.7.001.patch, HADOOP-13579-branch-2.7.002.patch, 
> HADOOP-13579-branch-2.7.003.patch
>
>
> Reported by [~chiwanpark]
> bq. Since the 2.7.3 release, Client.get/setPingInterval has changed from 
> public to package-private.
> bq. Giraph is one example broken by this change. 
> (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)






[jira] [Assigned] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2017-10-16 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned HADOOP-13579:
---

Assignee: Junping Du  (was: Tsuyoshi Ozawa)

> Fix source-level compatibility after HADOOP-11252
> -
>
> Key: HADOOP-13579
> URL: https://issues.apache.org/jira/browse/HADOOP-13579
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Akira Ajisaka
>Assignee: Junping Du
>Priority: Blocker
> Fix For: 2.6.5, 2.7.4
>
> Attachments: HADOOP-13579-branch-2.6.001.patch, 
> HADOOP-13579-branch-2.6.002.patch, HADOOP-13579-branch-2.6.003.patch, 
> HADOOP-13579-branch-2.7.001.patch, HADOOP-13579-branch-2.7.002.patch, 
> HADOOP-13579-branch-2.7.003.patch
>
>
> Reported by [~chiwanpark]
> bq. Since the 2.7.3 release, Client.get/setPingInterval has changed from 
> public to package-private.
> bq. Giraph is one example broken by this change. 
> (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)






[jira] [Commented] (HADOOP-14949) TestKMS#testACLs fails intermittently

2017-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206649#comment-16206649
 ] 

Hudson commented on HADOOP-14949:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13092 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13092/])
HADOOP-14949. TestKMS#testACLs fails intermittently. (xiao: rev 
b7ff624c767f76ca007d695afdc7a3815fceb04c)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> TestKMS#testACLs fails intermittently
> -
>
> Key: HADOOP-14949
> URL: https://issues.apache.org/jira/browse/HADOOP-14949
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HADOOP-14949.01.patch, HADOOP-14949.02.patch, 
> HADOOP-14949.03.patch
>
>
> We have seen some intermittent failures of this test:
> Error Message
> {noformat}
> java.lang.AssertionError
> {noformat}
> Stack Trace
> {noformat}java.lang.AssertionError: Should not have been able to 
> reencryptEncryptedKey
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1616)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1608)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:313)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:97)
> {noformat}
> Standard Output
> {noformat}
> 2017-10-07 09:44:11,112 INFO  log - jetty-6.1.26.cloudera.4
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   Java runtime version : 
> 1.7.0_121-b00
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   User: slave
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   KMS Hadoop Version: 
> 2.6.0-cdh5.14.0-SNAPSHOT
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'CREATE' ACL 'CREATE,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'DELETE' ACL 'DELETE'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'ROLLOVER' ACL 
> 'ROLLOVER,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'GET' ACL 'GET'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_KEYS' ACL 'GET_KEYS'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_METADATA' ACL 'GET_METADATA'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL 
> 'SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GENERATE_EEK' ACL 'GENERATE_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'DECRYPT_EEK' ACL 'DECRYPT_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k0' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k1' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,136 INFO  KMSAudit - No audit logger configured, using 
> default.
> 2017-10-07 09:44:11,137 INFO  KMSAudit - Initializing audit logger class 
> org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
> 2017-10-07 09:44:11,137 INFO  KMSWebApp - Initialized KeyProvider 
> CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Default key bitlength is 128
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - KMS Started
> 2017-10-07 09:44:11,141 INFO  PackagesResourceConfig - Scanning for root 
> resource and provider classes in the packages:
>   org.apache.hadoop.crypto.key.kms.server
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Root resource classes 
> found:
>   class org.apache.hadoop.crypto.key.kms.server.KMS
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Provider classes found:
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONWriter
>   class org.apache.hadoop.crypto.key.kms.server.KMSExceptionsProvider
>   class 

[jira] [Updated] (HADOOP-14949) TestKMS#testACLs fails intermittently

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14949:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.0 and branch-2.

Thanks for the reviews [~ajisakaa]!

> TestKMS#testACLs fails intermittently
> -
>
> Key: HADOOP-14949
> URL: https://issues.apache.org/jira/browse/HADOOP-14949
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HADOOP-14949.01.patch, HADOOP-14949.02.patch, 
> HADOOP-14949.03.patch
>
>
> We have seen some intermittent failures of this test:
> Error Message
> {noformat}
> java.lang.AssertionError
> {noformat}
> Stack Trace
> {noformat}java.lang.AssertionError: Should not have been able to 
> reencryptEncryptedKey
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1616)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1608)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:313)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:97)
> {noformat}
> Standard Output
> {noformat}
> 2017-10-07 09:44:11,112 INFO  log - jetty-6.1.26.cloudera.4
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   Java runtime version : 
> 1.7.0_121-b00
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   User: slave
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   KMS Hadoop Version: 
> 2.6.0-cdh5.14.0-SNAPSHOT
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'CREATE' ACL 'CREATE,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'DELETE' ACL 'DELETE'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'ROLLOVER' ACL 
> 'ROLLOVER,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'GET' ACL 'GET'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_KEYS' ACL 'GET_KEYS'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_METADATA' ACL 'GET_METADATA'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL 
> 'SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GENERATE_EEK' ACL 'GENERATE_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'DECRYPT_EEK' ACL 'DECRYPT_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k0' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k1' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,136 INFO  KMSAudit - No audit logger configured, using 
> default.
> 2017-10-07 09:44:11,137 INFO  KMSAudit - Initializing audit logger class 
> org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
> 2017-10-07 09:44:11,137 INFO  KMSWebApp - Initialized KeyProvider 
> CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Default key bitlength is 128
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - KMS Started
> 2017-10-07 09:44:11,141 INFO  PackagesResourceConfig - Scanning for root 
> resource and provider classes in the packages:
>   org.apache.hadoop.crypto.key.kms.server
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Root resource classes 
> found:
>   class org.apache.hadoop.crypto.key.kms.server.KMS
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Provider classes found:
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONWriter
>   class org.apache.hadoop.crypto.key.kms.server.KMSExceptionsProvider
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONReader
> 2017-10-07 09:44:11,147 INFO  WebApplicationImpl - Initiating Jersey 
> application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
> 2017-10-07 09:44:11,224 INFO  log - Started SocketConnector@localhost:46764
> Test KMS running at: http://localhost:46764/kms
> 2017-10-07 09:44:11,254 INFO  kms-audit - UNAUTHORIZED[op=CREATE_KEY, key=k, 

[jira] [Commented] (HADOOP-14949) TestKMS#testACLs fails intermittently

2017-10-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206611#comment-16206611
 ] 

Xiao Chen commented on HADOOP-14949:


Committing this based on Akira's +1.

> TestKMS#testACLs fails intermittently
> -
>
> Key: HADOOP-14949
> URL: https://issues.apache.org/jira/browse/HADOOP-14949
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14949.01.patch, HADOOP-14949.02.patch, 
> HADOOP-14949.03.patch
>
>
> We have seen some intermittent failures of this test:
> Error Message
> {noformat}
> java.lang.AssertionError
> {noformat}
> Stack Trace
> {noformat}java.lang.AssertionError: Should not have been able to 
> reencryptEncryptedKey
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1616)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1608)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:313)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:97)
> {noformat}
> Standard Output
> {noformat}
> 2017-10-07 09:44:11,112 INFO  log - jetty-6.1.26.cloudera.4
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   Java runtime version : 
> 1.7.0_121-b00
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   User: slave
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   KMS Hadoop Version: 
> 2.6.0-cdh5.14.0-SNAPSHOT
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'CREATE' ACL 'CREATE,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'DELETE' ACL 'DELETE'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'ROLLOVER' ACL 
> 'ROLLOVER,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'GET' ACL 'GET'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_KEYS' ACL 'GET_KEYS'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_METADATA' ACL 'GET_METADATA'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL 
> 'SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GENERATE_EEK' ACL 'GENERATE_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'DECRYPT_EEK' ACL 'DECRYPT_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k0' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k1' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,136 INFO  KMSAudit - No audit logger configured, using 
> default.
> 2017-10-07 09:44:11,137 INFO  KMSAudit - Initializing audit logger class 
> org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
> 2017-10-07 09:44:11,137 INFO  KMSWebApp - Initialized KeyProvider 
> CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Default key bitlength is 128
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - KMS Started
> 2017-10-07 09:44:11,141 INFO  PackagesResourceConfig - Scanning for root 
> resource and provider classes in the packages:
>   org.apache.hadoop.crypto.key.kms.server
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Root resource classes 
> found:
>   class org.apache.hadoop.crypto.key.kms.server.KMS
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Provider classes found:
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONWriter
>   class org.apache.hadoop.crypto.key.kms.server.KMSExceptionsProvider
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONReader
> 2017-10-07 09:44:11,147 INFO  WebApplicationImpl - Initiating Jersey 
> application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
> 2017-10-07 09:44:11,224 INFO  log - Started SocketConnector@localhost:46764
> Test KMS running at: http://localhost:46764/kms
> 2017-10-07 09:44:11,254 INFO  kms-audit - UNAUTHORIZED[op=CREATE_KEY, key=k, 
> user=client] 
> 2017-10-07 09:44:11,255 WARN  KMS - User cli...@example.com (auth:KERBEROS) 
> request POST http://localhost:46764/kms/v1/keys caused exception.
> 2017-10-07 09:44:11,256 WARN  

[jira] [Commented] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206603#comment-16206603
 ] 

Hanisha Koneru commented on HADOOP-14954:
-

Thanks for the fix, [~jzhuge].
LGTM.. +1 (non-binding).

> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14954.001.patch
>
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}}  to the beginning of this method.






[jira] [Commented] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206582#comment-16206582
 ] 

Misha Dmitriev commented on HADOOP-14938:
-

OK, uploaded patches for both branch-3.0 and branch-2.


> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HADOOP-14938:

Attachment: HADOOP-14938.branch-2.01.patch

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-2.01.patch, 
> HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HADOOP-14938:

Attachment: HADOOP-14938.branch-3.0.01.patch

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HADOOP-14938:

Status: Patch Available  (was: In Progress)

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch, HADOOP-14938.branch-3.0.01.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of the KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both the client-side connection 
> timeout and the read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.






[jira] [Updated] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14954:

Attachment: HADOOP-14954.001.patch

Patch 001
* Move {{++refCount}} to the beginning of {{MetricsSystemImpl#init}}
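
A sketch of the change described above (not necessarily the exact patch): the 
increment moves ahead of the early return so that every {{init}} call is 
balanced by a later {{shutdown}}.

{code:java}
  public synchronized MetricsSystem init(String prefix) {
    ++refCount;  // moved up: count the reference even when already initialized
    if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
      LOG.warn(this.prefix + " metrics system already initialized!");
      return this;
    }
    this.prefix = checkNotNull(prefix, "prefix");
    // ... rest of init() unchanged ...
  }
{code}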

> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14954.001.patch
>
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}}  to the beginning of this method.






[jira] [Updated] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14954:

Status: Patch Available  (was: Open)

> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14954.001.patch
>
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}}  to the beginning of this method.






[jira] [Updated] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14954:

Description: 
{{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
{{shutdown}}.
{code:java}
  public synchronized MetricsSystem init(String prefix) {
if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
  LOG.warn(this.prefix +" metrics system already initialized!");
  return this;
}
this.prefix = checkNotNull(prefix, "prefix");
++refCount;
{code}

Move {{++refCount}}  to the beginning of this method.

  was:{{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
{{shutdown}}.


> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}}  to the beginning of this method.






[jira] [Work started] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14938 started by Misha Dmitriev.
---
> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Created] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14954:
---

 Summary: MetricsSystemImpl#init should increment refCount when 
already initialized
 Key: HADOOP-14954
 URL: https://issues.apache.org/jira/browse/HADOOP-14954
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.0
Reporter: John Zhuge
Priority: Minor


{{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
{{shutdown}}.






[jira] [Updated] (HADOOP-14938) Configuration.updatingResource map should be initialized lazily

2017-10-16 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HADOOP-14938:

Status: Open  (was: Patch Available)

> Configuration.updatingResource map should be initialized lazily
> ---
>
> Key: HADOOP-14938
> URL: https://issues.apache.org/jira/browse/HADOOP-14938
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HADOOP-14938.01.patch, HADOOP-14938.02.patch, 
> HADOOP-14938.03.patch
>
>
> Using jxray (www.jxray.com), I've analyzed a heap dump of YARN RM running in 
> a big cluster. The tool uncovered several inefficiencies in the RM memory. It 
> turns out that one of the biggest sources of memory waste, responsible for 
> almost 1/4 of used memory, is empty ConcurrentHashMap instances in 
> org.apache.hadoop.conf.Configuration.updatingResource:
> {code}
> 905,551K (24.0%): java.util.concurrent.ConcurrentHashMap: 22118 / 100% of 
> empty 905,551K (24.0%)
> ↖org.apache.hadoop.conf.Configuration.updatingResource
> ↖{j.u.WeakHashMap}.keys
> ↖Java Static org.apache.hadoop.conf.Configuration.REGISTRY
> {code}
> That is, there are 22118 empty ConcurrentHashMaps here, and they collectively 
> waste ~905MB of memory. This is caused by eager initialization of these maps. 
> To address this problem, we should initialize them lazily.






[jira] [Assigned] (HADOOP-14953) MetricsSystemImpl should consistently check minicluster mode

2017-10-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HADOOP-14953:
---

Assignee: Bharat Viswanadham

> MetricsSystemImpl should consistently check minicluster mode
> 
>
> Key: HADOOP-14953
> URL: https://issues.apache.org/jira/browse/HADOOP-14953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> Found this when writing some tests related to JvmMetrics.
> It appears that calling {{JvmMetrics.initSingleton}} twice in minicluster 
> mode works, but calling {{JvmMetrics.create}} twice doesn't.
> This jira suggests investigating whether this is intentional, and likely 
> making the check of {{DefaultMetricsSystem.inMiniClusterMode()}} consistent 
> in {{MetricsSystemImpl}} to ease testing.
> {noformat}
> org.apache.hadoop.metrics2.MetricsException: Metrics source JvmMetrics 
> already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at 
> org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:95)
> {noformat}
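
A minimal reproduction sketch based on the description above (the second-call 
behaviour is as reported in this jira, not verified here):

{code:java}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.source.JvmMetrics;

public class MiniClusterJvmMetricsSketch {
  public static void main(String[] args) {
    DefaultMetricsSystem.setMiniClusterMode(true);
    MetricsSystem ms = DefaultMetricsSystem.initialize("Test");

    JvmMetrics.initSingleton("Test", null);
    JvmMetrics.initSingleton("Test", null); // returns the cached singleton, no error

    JvmMetrics.create("Test", null, ms);
    JvmMetrics.create("Test", null, ms);    // throws MetricsException: source already exists
  }
}
{code}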






[jira] [Updated] (HADOOP-10307) Support multiple Authentication mechanisms for HTTP

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10307:
--
Resolution: Won't Do
Status: Resolved  (was: Patch Available)

Cleaning up JIRAs that are no longer relevant.


> Support multiple Authentication mechanisms for HTTP
> ---
>
> Key: HADOOP-10307
> URL: https://issues.apache.org/jira/browse/HADOOP-10307
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10307.patch, HADOOP-10307.patch, 
> HADOOP-10307.patch
>
>
> Currently it is possible to specify a custom Authentication Handler  for HTTP 
> authentication.  
> We have a requirement to support multiple mechanisms  to authenticate HTTP 
> access.
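
As a rough illustration of "multiple mechanisms" (hypothetical types, not the hadoop-auth AuthenticationHandler API), a composite authenticator could try each configured mechanism in turn and use the first one that recognises the request:

{code}
import java.util.List;

// Illustrative interfaces only; not an existing Hadoop API.
interface Mechanism {
  boolean canHandle(String authorizationHeader);
  String authenticate(String authorizationHeader); // returns the principal
}

class CompositeAuthenticator {
  private final List<Mechanism> mechanisms;

  CompositeAuthenticator(List<Mechanism> mechanisms) {
    this.mechanisms = mechanisms;
  }

  String authenticate(String authorizationHeader) {
    for (Mechanism m : mechanisms) {
      if (m.canHandle(authorizationHeader)) {
        return m.authenticate(authorizationHeader);
      }
    }
    return null; // no mechanism matched; the caller would issue a 401 challenge
  }
}
{code}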



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-9296) Authenticating users from different realm without a trust relationship

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-9296.
--
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> Authenticating users from different realm without a trust relationship
> --
>
> Key: HADOOP-9296
> URL: https://issues.apache.org/jira/browse/HADOOP-9296
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9296-1.1.patch, HADOOP-9296.patch, 
> HADOOP-9296.patch, multirealm.pdf
>
>
> Hadoop Masters (JobTracker and NameNode) and slaves (Data Node and 
> TaskTracker) are part of the Hadoop domain, controlled by Hadoop Active 
> Directory. 
> The users belong to the CORP domain, controlled by the CORP Active Directory. 
> In the absence of a one-way trust from HADOOP DOMAIN to CORP DOMAIN, how will 
> Hadoop Servers (JobTracker, NameNode) authenticate CORP users?
> The solution and implementation details are in the attachment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-10057) Add ability in Hadoop servers (Namenode, JobTracker, Datanode ) to support multiple QOP (Authentication , Privacy) simultaneously

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-10057.
---
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> Add ability in Hadoop servers (Namenode, JobTracker, Datanode ) to support 
> multiple QOP  (Authentication , Privacy) simultaneously
> --
>
> Key: HADOOP-10057
> URL: https://issues.apache.org/jira/browse/HADOOP-10057
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10057.pdf, hadoop-10057-branch-1.2.patch
>
>
> Add ability in Hadoop servers (Namenode, JobTracker, Datanode) to support 
> multiple QOP (Authentication, Privacy) simultaneously.
> Hadoop servers currently support only one QOP (quality of protection) for the 
> whole cluster.
> We want Hadoop servers to support multiple QOP at the same time. 
> The logic used to determine the QOP should be pluggable.
> This will enable hadoop servers to communicate with different types of 
> clients with different QOP.
> A sample usecase:
> Let each Hadoop server support two QOP .
> 1. Authentication
> 2. Privacy (Privacy includes Authentication) .
> The Hadoop servers and internal clients need only Authentication, without 
> incurring the cost of encryption. External clients use Privacy. 
> An ip-whitelist logic to determine the QOP is provided and used as the 
> default QOP resolution logic.
>  
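
The description above asks for pluggable QOP resolution; a hypothetical sketch of such a resolver with an ip-whitelist default (illustrative names, not an existing Hadoop interface) could look like this:

{code}
import java.net.InetAddress;
import java.util.Set;

// Illustrative only. "auth" = authentication only, "auth-conf" = privacy,
// following the usual SASL QOP names.
interface QopResolver {
  String resolveQop(InetAddress clientAddress);
}

class WhitelistQopResolver implements QopResolver {
  private final Set<String> internalHosts;

  WhitelistQopResolver(Set<String> internalHosts) {
    this.internalHosts = internalHosts;
  }

  @Override
  public String resolveQop(InetAddress clientAddress) {
    // internal clients authenticate only; external clients also get encryption
    return internalHosts.contains(clientAddress.getHostAddress()) ? "auth" : "auth-conf";
  }
}
{code}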



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-9939) Custom Processing for Errors

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-9939.
--
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> Custom Processing for Errors
> 
>
> Key: HADOOP-9939
> URL: https://issues.apache.org/jira/browse/HADOOP-9939
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: hadoop-9939.patch
>
>   Original Estimate: 20h
>  Remaining Estimate: 20h
>
> We have a use case where we want to display a different error message and take 
> some bookkeeping actions when there is an authentication failure in Hadoop.
> There could be other error cases where we want to associate custom actions or 
> messages.
> The work is to define a framework to attach custom error processors as part of 
> exception handling, and to use that framework to display a custom error message 
> for authentication failures.
> Please review and let me know your comments or alternatives.
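
A hypothetical sketch of such a framework (illustrative names, not an existing Hadoop API): processors are registered and consulted during exception handling, and the first one that matches customises the message.

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only.
interface ErrorProcessor {
  boolean matches(Throwable error);
  String process(Throwable error, String defaultMessage);
}

class ErrorProcessors {
  private final List<ErrorProcessor> processors = new ArrayList<>();

  void register(ErrorProcessor processor) {
    processors.add(processor);
  }

  String messageFor(Throwable error, String defaultMessage) {
    for (ErrorProcessor p : processors) {
      if (p.matches(error)) {
        return p.process(error, defaultMessage); // e.g. custom auth-failure text
      }
    }
    return defaultMessage;
  }
}
{code}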



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-8923) WEBUI shows an intermediatory page when the cookie expires.

2017-10-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony resolved HADOOP-8923.
--
Resolution: Won't Do

Cleaning up jiras which are not relevant anymore.

> WEBUI shows an intermediatory page when the cookie expires.
> ---
>
> Key: HADOOP-8923
> URL: https://issues.apache.org/jira/browse/HADOOP-8923
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-8923.patch
>
>
> The WEBUI does Authentication (SPNEGO/Custom) and then drops a cookie. 
> Once the cookie expires, the WEBUI displays a page saying that the 
> "authentication token expired". The user has to refresh the page to get 
> authenticated again. This page can be avoided and the user can be 
> re-authenticated without such a page ever being shown.
> Also, when the cookie expires, a warning is logged. There is no need 
> to log this, as it is not of any significance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14949) TestKMS#testACLs fails intermittently

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206436#comment-16206436
 ] 

Hadoop QA commented on HADOOP-14949:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HADOOP-14949 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892415/HADOOP-14949.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dbe851d408dc 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21bc855 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13520/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13520/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestKMS#testACLs fails intermittently
> -
>
> Key: HADOOP-14949
> URL: https://issues.apache.org/jira/browse/HADOOP-14949
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>  

[jira] [Updated] (HADOOP-14953) MetricsSystemImpl should consistently check minicluster mode

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14953:
---
Target Version/s: 3.0.0  (was: 2.9.0)

> MetricsSystemImpl should consistently check minicluster mode
> 
>
> Key: HADOOP-14953
> URL: https://issues.apache.org/jira/browse/HADOOP-14953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Priority: Minor
>
> Found this when writing some tests related to JvmMetrics.
> It appears that calling {{JvmMetrics.initSingleton}} twice in minicluster 
> mode works, but calling {{JvmMetrics.create}} twice doesn't.
> This jira suggests investigating whether this is intentional, and likely 
> making the check of {{DefaultMetricsSystem.inMiniClusterMode()}} consistent 
> in {{MetricsSystemImpl}} to ease testing.
> {noformat}
> org.apache.hadoop.metrics2.MetricsException: Metrics source JvmMetrics 
> already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at 
> org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:95)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-14953) MetricsSystemImpl should consistently check minicluster mode

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen moved HDFS-12668 to HADOOP-14953:
---

Target Version/s:   (was: 2.9.0)
 Component/s: (was: test)
  test
 Key: HADOOP-14953  (was: HDFS-12668)
 Project: Hadoop Common  (was: Hadoop HDFS)

> MetricsSystemImpl should consistently check minicluster mode
> 
>
> Key: HADOOP-14953
> URL: https://issues.apache.org/jira/browse/HADOOP-14953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Priority: Minor
>
> Found this when writing some tests related to JvmMetrics.
> It appears that calling {{JvmMetrics.initSingleton}} twice in minicluster 
> mode works, but calling {{JvmMetrics.create}} twice doesn't.
> This jira suggests investigating whether this is intentional, and likely 
> making the check of {{DefaultMetricsSystem.inMiniClusterMode()}} consistent 
> in {{MetricsSystemImpl}} to ease testing.
> {noformat}
> org.apache.hadoop.metrics2.MetricsException: Metrics source JvmMetrics 
> already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at 
> org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:95)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14953) MetricsSystemImpl should consistently check minicluster mode

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14953:
---
Target Version/s: 2.9.0

> MetricsSystemImpl should consistently check minicluster mode
> 
>
> Key: HADOOP-14953
> URL: https://issues.apache.org/jira/browse/HADOOP-14953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Priority: Minor
>
> Found this when writing some tests related to JvmMetrics.
> It appears that calling {{JvmMetrics.initSingleton}} twice in minicluster 
> mode works, but calling {{JvmMetrics.create}} twice doesn't.
> This jira suggests investigating whether this is intentional, and likely 
> making the check of {{DefaultMetricsSystem.inMiniClusterMode()}} consistent 
> in {{MetricsSystemImpl}} to ease testing.
> {noformat}
> org.apache.hadoop.metrics2.MetricsException: Metrics source JvmMetrics 
> already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at 
> org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:95)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206382#comment-16206382
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 17s{color} | {color:orange} root: The patch generated 39 new + 102 unchanged 
- 12 fixed = 141 total (was 114) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common-project_hadoop-kms generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m  
9s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}210m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HADOOP-14951 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892391/0001-HADOOP-14951-Make-the-KMSACLs-implementation-customi.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 69cd91fb222b 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-14950) har file system throws ArrayIndexOutOfBoundsException

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206272#comment-16206272
 ] 

Wei-Chiu Chuang commented on HADOOP-14950:
--

A correctly formatted _index file for a har containing a 1MB file named foobar 
should contain the following:
{noformat}
%2F dir 1508173657482+1023+hdfs+supergroup 0 0 foobar
%2Ffoobar file part-0 0 1048576 1508173512707+420+hdfs+supergroup
{noformat}

> har file system throws ArrayIndexOutOfBoundsException
> -
>
> Key: HADOOP-14950
> URL: https://issues.apache.org/jira/browse/HADOOP-14950
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.9.2
>Reporter: Wei-Chiu Chuang
>  Labels: newbie
>
> When listing a har file system file, it throws an AIOOBE like the following:
> {noformat}
> $ hdfs dfs -ls har:///abc.har
> -ls: Fatal internal error
> java.lang.ArrayIndexOutOfBoundsException: 1
> at org.apache.hadoop.fs.HarFileSystem$HarStatus.(HarFileSystem.java:597)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1201)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:1098)
> at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:166)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2711)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
> at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
> at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {noformat}
> Checking the code, it looks like the _index file in the har is malformed. The 
> parser expects at least two strings separated by a space on each line, and this 
> AIOOBE is possible if the second string does not exist.
> File this jira to improve the error handling of such case.
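
A minimal sketch of the improved error handling (illustrative only, not the HarFileSystem code): validate the number of fields on each _index line before indexing into the split result, so a malformed archive yields a clear IOException instead of an ArrayIndexOutOfBoundsException.

{code}
import java.io.IOException;

// Illustrative helper only.
final class HarIndexLine {
  static String[] parse(String line) throws IOException {
    String[] fields = line.split(" ");
    if (fields.length < 2) {
      throw new IOException(
          "Malformed _index line, expected '<name> <type> ...' but got: " + line);
    }
    return fields; // fields[0] = encoded name, fields[1] = "file" or "dir"
  }
}
{code}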



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14950) har file system throws ArrayIndexOutOfBoundsException

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206269#comment-16206269
 ] 

Wei-Chiu Chuang commented on HADOOP-14950:
--

Unassigning myself. Feel free to reassign. This should be a pretty easy fix for 
anyone to take. For more information, take a look at the Hadoop Archive doc: 
https://hadoop.apache.org/docs/current/hadoop-archives/HadoopArchives.html

> har file system throws ArrayIndexOutOfBoundsException
> -
>
> Key: HADOOP-14950
> URL: https://issues.apache.org/jira/browse/HADOOP-14950
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.9.2
>Reporter: Wei-Chiu Chuang
>  Labels: newbie
>
> When listing a har file system file, it throws an AIOOBE like the following:
> {noformat}
> $ hdfs dfs -ls har:///abc.har
> -ls: Fatal internal error
> java.lang.ArrayIndexOutOfBoundsException: 1
> at org.apache.hadoop.fs.HarFileSystem$HarStatus.(HarFileSystem.java:597)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1201)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:1098)
> at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:166)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2711)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
> at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
> at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {noformat}
> Checking the code, it looks like the _index file in the har is malformed. The 
> parser expects at least two strings separated by a space on each line, and this 
> AIOOBE is possible if the second string does not exist.
> File this jira to improve the error handling of such case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14949) TestKMS#testACLs fails intermittently

2017-10-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14949:
---
Attachment: HADOOP-14949.03.patch

Good catch Akira. Reduced method scope in patch 3.

> TestKMS#testACLs fails intermittently
> -
>
> Key: HADOOP-14949
> URL: https://issues.apache.org/jira/browse/HADOOP-14949
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14949.01.patch, HADOOP-14949.02.patch, 
> HADOOP-14949.03.patch
>
>
> We have seen some intermittent failures of this test:
> Error Message
> {noformat}
> java.lang.AssertionError
> {noformat}
> Stack Trace
> {noformat}java.lang.AssertionError: Should not have been able to 
> reencryptEncryptedKey
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1616)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$11$15.run(TestKMS.java:1608)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:313)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:97)
> {noformat}
> Standard Output
> {noformat}
> 2017-10-07 09:44:11,112 INFO  log - jetty-6.1.26.cloudera.4
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   Java runtime version : 
> 1.7.0_121-b00
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   User: slave
> 2017-10-07 09:44:11,131 INFO  KMSWebApp -   KMS Hadoop Version: 
> 2.6.0-cdh5.14.0-SNAPSHOT
> 2017-10-07 09:44:11,131 INFO  KMSWebApp - 
> -
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'CREATE' ACL 'CREATE,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'DELETE' ACL 'DELETE'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'ROLLOVER' ACL 
> 'ROLLOVER,SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,134 INFO  KMSACLs - 'GET' ACL 'GET'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_KEYS' ACL 'GET_KEYS'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GET_METADATA' ACL 'GET_METADATA'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL 
> 'SET_KEY_MATERIAL'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'GENERATE_EEK' ACL 'GENERATE_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - 'DECRYPT_EEK' ACL 'DECRYPT_EEK'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k0' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,135 INFO  KMSACLs - KEY_NAME 'k1' KEY_OP 'ALL' ACL '*'
> 2017-10-07 09:44:11,136 INFO  KMSAudit - No audit logger configured, using 
> default.
> 2017-10-07 09:44:11,137 INFO  KMSAudit - Initializing audit logger class 
> org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
> 2017-10-07 09:44:11,137 INFO  KMSWebApp - Initialized KeyProvider 
> CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/tmp/run_tha_testUYG3Cl/hadoop-common-project/hadoop-kms/target/ddbffdf2-e7d8-4e75-982a-debebb227075/kms.keystore
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - Default key bitlength is 128
> 2017-10-07 09:44:11,138 INFO  KMSWebApp - KMS Started
> 2017-10-07 09:44:11,141 INFO  PackagesResourceConfig - Scanning for root 
> resource and provider classes in the packages:
>   org.apache.hadoop.crypto.key.kms.server
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Root resource classes 
> found:
>   class org.apache.hadoop.crypto.key.kms.server.KMS
> 2017-10-07 09:44:11,146 INFO  ScanningResourceConfig - Provider classes found:
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONWriter
>   class org.apache.hadoop.crypto.key.kms.server.KMSExceptionsProvider
>   class org.apache.hadoop.crypto.key.kms.server.KMSJSONReader
> 2017-10-07 09:44:11,147 INFO  WebApplicationImpl - Initiating Jersey 
> application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
> 2017-10-07 09:44:11,224 INFO  log - Started SocketConnector@localhost:46764
> Test KMS running at: http://localhost:46764/kms
> 2017-10-07 09:44:11,254 INFO  kms-audit - UNAUTHORIZED[op=CREATE_KEY, key=k, 
> user=client] 
> 2017-10-07 09:44:11,255 WARN  KMS - User cli...@example.com (auth:KERBEROS) 
> request POST http://localhost:46764/kms/v1/keys caused exception.
> 2017-10-07 

[jira] [Assigned] (HADOOP-14950) har file system throws ArrayIndexOutOfBoundsException

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-14950:


Assignee: (was: Wei-Chiu Chuang)

> har file system throws ArrayIndexOutOfBoundsException
> -
>
> Key: HADOOP-14950
> URL: https://issues.apache.org/jira/browse/HADOOP-14950
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.9.2
>Reporter: Wei-Chiu Chuang
>  Labels: newbie
>
> When listing a har file system file, it throws an AIOOBE like the following:
> {noformat}
> $ hdfs dfs -ls har:///abc.har
> -ls: Fatal internal error
> java.lang.ArrayIndexOutOfBoundsException: 1
> at org.apache.hadoop.fs.HarFileSystem$HarStatus.(HarFileSystem.java:597)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1201)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:1098)
> at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:166)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2711)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
> at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
> at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {noformat}
> Checking the code, it looks like the _index file in the har is malformed. The 
> parser expects at least two strings separated by a space on each line, and this 
> AIOOBE is possible if the second string does not exist.
> File this jira to improve the error handling of such case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14950) har file system throws ArrayIndexOutOfBoundsException

2017-10-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14950:
-
Labels: newbie  (was: )

> har file system throws ArrayIndexOutOfBoundsException
> -
>
> Key: HADOOP-14950
> URL: https://issues.apache.org/jira/browse/HADOOP-14950
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CDH 5.9.2
>Reporter: Wei-Chiu Chuang
>  Labels: newbie
>
> When listing a har file system file, it throws an AIOOBE like the following:
> {noformat}
> $ hdfs dfs -ls har:///abc.har
> -ls: Fatal internal error
> java.lang.ArrayIndexOutOfBoundsException: 1
> at org.apache.hadoop.fs.HarFileSystem$HarStatus.(HarFileSystem.java:597)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1201)
> at 
> org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:1098)
> at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:166)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2711)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
> at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
> at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {noformat}
> Checking the code, it looks like the _index file in the har is malformed. The 
> parser expects at least two strings separated by a space on each line, and this 
> AIOOBE is possible if the second string does not exist.
> File this jira to improve the error handling of such case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206174#comment-16206174
 ] 

Hadoop QA commented on HADOOP-14880:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-common-project: The patch generated 2 new 
+ 18 unchanged - 0 fixed = 20 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 14s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 22s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HADOOP-14880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892388/HADOOP-14880-1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 337f69d804a1 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Updated] (HADOOP-14952) Catalina use of hadoop-client throws ClassNotFoundException for jersey

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-14952:
-
Summary: Catalina use of hadoop-client throws ClassNotFoundException for 
jersey   (was: Newest hadoop-client throws ClassNotFoundException)

> Catalina use of hadoop-client throws ClassNotFoundException for jersey 
> ---
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client in version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (it was conflicting with Spring).
> I decided to try version 3.0.0-beta1, but the server didn't start, failing with 
> this exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies, the server 
> started, but I think it should already be included in hadoop-client's dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14952) Newest hadoop-client throws ClassNotFoundException

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206162#comment-16206162
 ] 

Sean Busbey commented on HADOOP-14952:
--

Are you using Jersey in your own app? I don't see anything in that stack trace 
to indicate that Hadoop is requesting the class.

> Newest hadoop-client throws ClassNotFoundException
> --
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client in version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (it was conflicting with Spring).
> I decided to try version 3.0.0-beta1, but the server didn't start, failing with 
> this exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies, the server 
> started, but I think it should already be included in hadoop-client's dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14952) Newest hadoop-client throws ClassNotFoundException

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-14952:
-
Affects Version/s: 3.0.0-beta1

> Newest hadoop-client throws ClassNotFoundException
> --
>
> Key: HADOOP-14952
> URL: https://issues.apache.org/jira/browse/HADOOP-14952
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Kamil
>
> I was using org.apache.hadoop:hadoop-client in version 2.7.4 and it worked 
> fine, but recently had problems with CGLIB (it was conflicting with Spring).
> I decided to try version 3.0.0-beta1, but the server didn't start, failing with 
> this exception:
> {code}
> 16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
> org.apache.catalina.core.ContainerBase.addChildInternal 
> ContainerBase.addChild: start:
>  org.apache.catalina.LifecycleException: Failed to start component 
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: 
> com/sun/jersey/api/core/DefaultResourceConfig
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
> at 
> org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
> at 
> org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
> at 
> org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
> at 
> org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
> at 
> org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
> at 
> org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.jersey.api.core.DefaultResourceConfig
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
> at 
> org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
> ... 21 more
> {code}
> After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies, the server 
> started, but I think it should already be included in hadoop-client's dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14935) Azure: POSIX permissions are taking effect in access() method even when authorization is enabled

2017-10-16 Thread Santhosh G Nayak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206063#comment-16206063
 ] 

Santhosh G Nayak commented on HADOOP-14935:
---

Thanks [~ste...@apache.org] for reviewing and committing the patch.

> Azure: POSIX permissions are taking effect in access() method even when 
> authorization is enabled
> 
>
> Key: HADOOP-14935
> URL: https://issues.apache.org/jira/browse/HADOOP-14935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.1.0
>
> Attachments: HADOOP-14935-003.patch, HADOOP-14935-004.patch, 
> HADOOP-14935-005.patch, HADOOP-14935.1.patch, HADOOP-14935.2.patch
>
>
> The FileSystem implementation class for Azure, i.e. {{NativeAzureFileSystem}}, does 
> not override the {{access(path,mode)}} method and uses the default implementation 
> from the base class. This base implementation uses the POSIX permissions to 
> check whether the requesting user has access to the given path, even when 
> authorization is enabled, which is incorrect.
> {{NativeAzureFileSystem.access()}} in authorization-enabled mode should use 
> the authorization mechanism provided instead of relying on the POSIX 
> permissions. So the proposal is to override the {{FileSystem.access()}} method 
> in {{NativeAzureFileSystem}} such that it honors the authorization mechanism 
> configured in authorization-enabled mode and falls back to POSIX permissions 
> otherwise.
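
A rough sketch of the proposed override (not the committed patch; the helpers {{isAuthorizationEnabled()}}, {{authorizer}} and {{toOperation()}} are placeholders, not the actual implementation):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

// Sketch of an access() override inside NativeAzureFileSystem; helper names
// are assumptions for illustration.
@Override
public void access(Path path, FsAction mode) throws IOException {
  if (isAuthorizationEnabled()) {
    // honor the configured authorization mechanism
    if (!authorizer.authorize(path.toString(), toOperation(mode))) {
      throw new AccessControlException(
          "User lacks " + mode + " access to " + path);
    }
  } else {
    // authorization disabled: keep the default POSIX permission behaviour
    super.access(path, mode);
  }
}
{code}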



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14935) Azure: POSIX permissions are taking effect in access() method even when authorization is enabled

2017-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206058#comment-16206058
 ] 

Hudson commented on HADOOP-14935:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13088 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13088/])
HADOOP-14935. Azure: POSIX permissions are taking effect in access() (stevel: 
rev 9fcc3a1fc8cab873034f5c308ceb2d5671a954e8)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/Constants.java


> Azure: POSIX permissions are taking effect in access() method even when 
> authorization is enabled
> 
>
> Key: HADOOP-14935
> URL: https://issues.apache.org/jira/browse/HADOOP-14935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.1.0
>
> Attachments: HADOOP-14935-003.patch, HADOOP-14935-004.patch, 
> HADOOP-14935-005.patch, HADOOP-14935.1.patch, HADOOP-14935.2.patch
>
>
> The FileSystem implementation class for Azure, i.e. {{NativeAzureFileSystem}}, does 
> not override the {{access(path,mode)}} method and uses the default implementation 
> from the base class. This base implementation uses the POSIX permissions to 
> check whether the requesting user has access to the given path, even when 
> authorization is enabled, which is incorrect.
> {{NativeAzureFileSystem.access()}} in authorization-enabled mode should use 
> the authorization mechanism provided instead of relying on the POSIX 
> permissions. So the proposal is to override the {{FileSystem.access()}} method 
> in {{NativeAzureFileSystem}} such that it honors the authorization mechanism 
> configured in authorization-enabled mode and falls back to POSIX permissions 
> otherwise.






[jira] [Created] (HADOOP-14952) Newest hadoop-client throws ClassNotFoundException

2017-10-16 Thread Kamil (JIRA)
Kamil created HADOOP-14952:
--

 Summary: Newest hadoop-client throws ClassNotFoundException
 Key: HADOOP-14952
 URL: https://issues.apache.org/jira/browse/HADOOP-14952
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kamil


I was using org.apache.hadoop:hadoop-client version 2.7.4 and it worked 
fine, but recently I had problems with CGLIB (it was conflicting with Spring).
I decided to try version 3.0.0-beta1, but the server didn't start, failing with this exception:
{code}
16-Oct-2017 10:27:12.918 SEVERE [localhost-startStop-1] 
org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: 
start:
 org.apache.catalina.LifecycleException: Failed to start component 
[StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:158)
at 
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
at 
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at 
org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1107)
at 
org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1841)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: 
com/sun/jersey/api/core/DefaultResourceConfig
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.catalina.startup.WebappServiceLoader.loadServices(WebappServiceLoader.java:188)
at 
org.apache.catalina.startup.WebappServiceLoader.load(WebappServiceLoader.java:159)
at 
org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1611)
at 
org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1131)
at 
org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:771)
at 
org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:298)
at 
org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
at 
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5092)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
... 10 more
Caused by: java.lang.ClassNotFoundException: 
com.sun.jersey.api.core.DefaultResourceConfig
at 
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1299)
at 
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1133)
... 21 more
{code}

After adding com.sun.jersey:jersey-server:1.9.1 to my dependencies the server 
started, but I think it should already be included among hadoop-client's own dependencies.
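
For reference, the workaround amounts to a dependency combination like the following sketch (versions exactly as reported above):

{code}
<!-- Sketch of the reported workaround: hadoop-client 3.0.0-beta1 plus the
     Jersey 1.x server module that provides the missing class. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>3.0.0-beta1</version>
</dependency>
<dependency>
  <!-- works around NoClassDefFoundError: com/sun/jersey/api/core/DefaultResourceConfig -->
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-server</artifactId>
  <version>1.9.1</version>
</dependency>
{code}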






[jira] [Updated] (HADOOP-14935) Azure: POSIX permissions are taking effect in access() method even when authorization is enabled

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14935:

   Resolution: Fixed
Fix Version/s: 3.1.0
   2.9.0
   Status: Resolved  (was: Patch Available)

+1, committed the stripped-down patch 005, which culls the permissions checks 
in getFileStatus() entirely; applied to branch-2 and trunk. This should be 
enough for Hive to be happy without adding any extra complexity or options 
into the FS client. 

> Azure: POSIX permissions are taking effect in access() method even when 
> authorization is enabled
> 
>
> Key: HADOOP-14935
> URL: https://issues.apache.org/jira/browse/HADOOP-14935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.1.0
>
> Attachments: HADOOP-14935-003.patch, HADOOP-14935-004.patch, 
> HADOOP-14935-005.patch, HADOOP-14935.1.patch, HADOOP-14935.2.patch
>
>
> The FileSystem implementation class for Azure, i.e. {{NativeAzureFileSystem}}, does 
> not override the {{access(path, mode)}} method and uses the default implementation 
> from the base class. This base implementation uses the POSIX permissions to 
> check whether the requested user has access to the given path, even when 
> authorization is enabled, which is incorrect.
> {{NativeAzureFileSystem.access()}} in authorization-enabled mode should use 
> the configured authorization mechanism instead of relying on the POSIX 
> permissions. So the proposal is to override the {{FileSystem.access()}} method 
> in {{NativeAzureFileSystem}} such that it honors the authorization mechanism 
> configured in authorization-enabled mode and falls back to POSIX permissions 
> otherwise.






[jira] [Updated] (HADOOP-14951) KMSACL implementation is not configurable

2017-10-16 Thread Zsombor Gegesy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsombor Gegesy updated HADOOP-14951:

Status: Patch Available  (was: Open)

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>  Labels: key-management, kms
> Attachments: 
> 0001-HADOOP-14951-Make-the-KMSACLs-implementation-customi.patch
>
>
> Currently it is not possible to customize the KMS's key-management access checks 
> if the KMSACLs behaviour is not sufficient. An external key management solution 
> would need a higher-level API through which it can decide whether a given 
> operation is allowed or not.
>  To achieve this, one solution would be to introduce a new interface, implemented 
> by KMSACLs (and potentially by other key managers), with a new configuration 
> point where the actual implementation class can be specified.






[jira] [Updated] (HADOOP-14951) KMSACL implementation is not configurable

2017-10-16 Thread Zsombor Gegesy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsombor Gegesy updated HADOOP-14951:

Attachment: 0001-HADOOP-14951-Make-the-KMSACLs-implementation-customi.patch

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>  Labels: key-management, kms
> Attachments: 
> 0001-HADOOP-14951-Make-the-KMSACLs-implementation-customi.patch
>
>
> Currently it is not possible to customize the KMS's key-management access checks 
> if the KMSACLs behaviour is not sufficient. An external key management solution 
> would need a higher-level API through which it can decide whether a given 
> operation is allowed or not.
>  To achieve this, one solution would be to introduce a new interface, implemented 
> by KMSACLs (and potentially by other key managers), with a new configuration 
> point where the actual implementation class can be specified.






[jira] [Created] (HADOOP-14951) KMSACL implementation is not configurable

2017-10-16 Thread Zsombor Gegesy (JIRA)
Zsombor Gegesy created HADOOP-14951:
---

 Summary: KMSACL implementation is not configurable
 Key: HADOOP-14951
 URL: https://issues.apache.org/jira/browse/HADOOP-14951
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Reporter: Zsombor Gegesy


Currently it is not possible to customize the KMS's key-management access checks 
if the KMSACLs behaviour is not sufficient. An external key management solution 
would need a higher-level API through which it can decide whether a given 
operation is allowed or not.
 To achieve this, one solution would be to introduce a new interface, implemented 
by KMSACLs (and potentially by other key managers), with a new configuration 
point where the actual implementation class can be specified.
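
A rough sketch of the kind of pluggable access-check interface this asks for; the names below are hypothetical, not an existing Hadoop API:

{code}
// Hypothetical sketch of a pluggable KMS access-control interface. The
// interface and method names (and any configuration key used to select an
// implementation) are illustrative only; they are not part of Hadoop today.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public interface KeyAccessPolicy {

  /** Load or refresh any state (for example ACLs) from configuration. */
  void init(Configuration conf);

  /**
   * Decide whether the given user may perform the given KMS operation
   * (e.g. CREATE, DELETE, ROLLOVER, GENERATE_EEK, DECRYPT_EEK) on a key.
   */
  boolean isAllowed(UserGroupInformation user, String operation, String keyName);
}
{code}

The default implementation could simply wrap today's KMSACLs logic, while an external key manager plugs in its own decision code via the new configuration point.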






[jira] [Comment Edited] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206025#comment-16206025
 ] 

Gabor Bota edited comment on HADOOP-14880 at 10/16/17 2:57 PM:
---

Added property to kms-default.xml instead of core-site.xml. Please suggest the 
source path or documentation where I can add this configuration. core-site.xml 
is empty by default.

There is no separate KMSClientProvider unit test file, so I've added the test 
for the config in TestLoadBalancingKMSClientProvider.



was (Author: gabor.bota):
Added property to kms-default.xml instead of core-site.xml. Please suggest the 
source path or documentation where I can add this configuration. core-site.xml 
is empty by default.

There is no separate KMSClientProvider unit test file, so I've added the test 
for the parameter in TestLoadBalancingKMSClientProvider.


> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of the KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml:
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both the client-side connection 
> timeout and the read timeout.
> In fact it doesn't look like this config is tested, so it would be really nice 
> to add a test for it as well.






[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

Added property to kms-default.xml instead of core-site.xml. Please suggest the 
source path or documentation where I can add this configuration. core-site.xml 
is empty by default.

There is no separate KMSClientProvider unit test file, so I've added the test 
for the parameter in TestLoadBalancingKMSClientProvider.
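
For context, the entry being documented would have roughly the following shape; the value shown is only an illustration, not necessarily the shipped default:

{code}
<!-- Illustrative only: the shape of the property documented by this patch in
     kms-default.xml. The value is an example, not necessarily the default. -->
<property>
  <name>hadoop.security.kms.client.timeout</name>
  <value>60</value>
  <description>
    Timeout used by the KMS client (KMSClientProvider); per the issue
    description below, it covers both the connection and the read timeout.
  </description>
</property>
{code}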


> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of the KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml:
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both the client-side connection 
> timeout and the read timeout.
> In fact it doesn't look like this config is tested, so it would be really nice 
> to add a test for it as well.






[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-1.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of the KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml:
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both the client-side connection 
> timeout and the read timeout.
> In fact it doesn't look like this config is tested, so it would be really nice 
> to add a test for it as well.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206010#comment-16206010
 ] 

Ewan Higgs commented on HADOOP-13786:
-

{quote}I should add that AFAIK Ewan has been testing the "magic" committer, not 
the staging ones; his store is naturally consistent.{quote} 
Yes.

{quote}What I'd like to suggest here is we create a branch for the S3Guard 
phase II work (HADOOP-14825), make this the first commit & then work on the 
s3guard II improvements above it. {quote}
+1

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> cloud-intergration-test-failure.log, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205970#comment-16205970
 ] 

Steve Loughran commented on HADOOP-13786:
-

Thanks (I should add that AFAIK Ewan has been testing the "magic" committer, 
not the staging ones; his store is naturally consistent.)

What I'd like to suggest here is we create a branch for the S3Guard phase II 
work (HADOOP-14825), make this the first commit & then work on the s3guard II 
improvements above it. That way, those of us working on S3 things have time to 
use all of this code before making the leap to say "ready for trunk", and we 
can avoid the problem of other patches to S3A conflicting with this one.

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> cloud-intergration-test-failure.log, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Assigned] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14880:
---

Assignee: Gabor Bota

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
>
> Similar to HADOOP-14783, I did a sweep of the KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml:
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both the client-side connection 
> timeout and the read timeout.
> In fact it doesn't look like this config is tested, so it would be really nice 
> to add a test for it as well.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205884#comment-16205884
 ] 

Ewan Higgs commented on HADOOP-13786:
-

I've been testing this mostly from a performance point of view using Hadoop MR2 
with {{NullMetadataStore}}, and I'm pretty happy with the results. It's indeed 
twice as fast as the old-style {{FileOutputCommitter}} on the system I used.

There's a lot of code here and it's been moving quite quickly, but it's in 
overall good shape, imo. As I'm using a {{NullMetadataStore}}, a lot of the 
possible error scenarios won't pop up for me, so it would be great if people can 
cover those areas.
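
(For anyone reproducing that setup: selecting the no-op store is, as far as I can tell, the usual S3A metadata store switch; treat the property below as an assumption rather than something from this patch.)

{code}
<!-- Assumed S3A configuration for running with the no-op metadata store,
     i.e. S3Guard effectively disabled, as in the performance runs above. -->
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
</property>
{code}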

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> cloud-intergration-test-failure.log, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205866#comment-16205866
 ] 

Steve Loughran commented on HADOOP-13786:
-

# I've just purged all the Yetus reviews to keep this patch smaller; I'll do 
another patch with retry logic added for a 500 response from S3/DynamoDB, in both 
cases assuming that the call can be re-issued. That is, even for something we 
don't consider idempotent (GET,...), if the server sent a 500 back, you can still 
try again. 
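
Purely as an illustration of that re-issue-on-500 idea (not the actual retry patch), a generic sketch; the isServerError() check is a hypothetical stand-in for inspecting the AWS SDK exception:

{code}
// Generic illustration only: re-issue a call when the service answered with
// HTTP 500, on the assumption stated above that such a call can be retried.
import java.util.concurrent.Callable;

public final class Retry500Sketch {

  static <T> T withRetryOn500(Callable<T> call, int maxAttempts, long sleepMillis)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return call.call();
      } catch (Exception e) {
        if (attempt >= maxAttempts || !isServerError(e)) {
          throw e;                              // out of attempts, or not a 500
        }
        Thread.sleep(sleepMillis * attempt);    // simple linear backoff, then retry
      }
    }
  }

  // Hypothetical predicate: a real version would inspect the AWS SDK
  // exception's HTTP status code rather than its message text.
  private static boolean isServerError(Exception e) {
    String m = e.getMessage();
    return m != null && m.contains("Status Code: 500");
  }

  private Retry500Sketch() {
  }
}
{code}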

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> cloud-intergration-test-failure.log, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 49 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 23 new + 166 unchanged 
- 30 fixed = 189 total (was 196) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 46 unchanged - 2 fixed = 46 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-tools_hadoop-aws generated 5 new + 0 unchanged 
- 0 fixed = 5 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 56s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 43 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 50 new + 158 unchanged 
- 24 fixed = 208 total (was 182) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 46 new + 121 unchanged 
- 24 fixed = 167 total (was 145) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Format-string method String.format(String, Object[]) called with format 
string 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
51s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 46 new + 120 unchanged 
- 23 fixed = 166 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
23s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13786 |
| JIRA Patch URL | 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-13345 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
19s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
8s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 41 new + 121 unchanged 
- 24 fixed = 162 total (was 145) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 30 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-aws in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 16s{color} 
| {color:red} hadoop-mapreduce-client-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 49s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 41 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-common-project/hadoop-common in HADOOP-13345 
has 19 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m  2s{color} 
| {color:red} root generated 1 new + 787 unchanged - 1 fixed = 788 total (was 
788) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 48 new + 121 unchanged 
- 23 fixed = 169 total (was 144) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 44s{color} | 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
37s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 46s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 30 new + 120 unchanged 
- 22 fixed = 150 total (was 142) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
51s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 41 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
38s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
51s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 36s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 43 new + 120 unchanged 
- 23 fixed = 163 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m  
5s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 44s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 54s{color} | {color:orange} root: The patch generated 40 new + 120 unchanged 
- 22 fixed = 160 total (was 142) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
3s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
10s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 50s{color} 
| {color:red} root generated 14 new + 777 unchanged - 1 fixed = 791 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 93 new + 97 unchanged 
- 14 fixed = 190 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 17 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Dead store to range in 
org.apache.hadoop.fs.s3a.S3AFileSystem$WriteOperationHelper.newUploadPartRequest(String,
 int, int, InputStream, File, Long)  At 
S3AFileSystem.java:org.apache.hadoop.fs.s3a.S3AFileSystem$WriteOperationHelper.newUploadPartRequest(String,
 int, int, InputStream, File, Long)  At S3AFileSystem.java:[line 2994] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HADOOP-13786 |
| JIRA Patch URL | 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
7s{color} | {color:red} Docker failed to build yetus/hadoop:612578f. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13786 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868602/HADOOP-13786-HADOOP-13345-028.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12348/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> cloud-intergration-test-failure.log, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 
45s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 7s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
3s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
52s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 18s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  5s{color} | {color:orange} root: The patch generated 51 new + 100 unchanged 
- 23 fixed = 151 total (was 123) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-registry in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
54s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:612578f |
| JIRA Issue | HADOOP-13786 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868340/HADOOP-13786-HADOOP-13345-027.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2e8e45e81351 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 

[jira] [Issue Comment Deleted] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
 2s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m  0s{color} 
| {color:red} root generated 12 new + 777 unchanged - 1 fixed = 789 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 101 new + 100 
unchanged - 23 fixed = 201 total (was 123) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 21 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-registry in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
