[jira] [Assigned] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2018-04-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-14188:
--

Assignee: Ewan Higgs  (was: Akira Ajisaka)

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch, 
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch, 
> HADOOP-14188.06.patch, HADOOP-14188.07.patch, HADOOP-14188.08.patch, 
> HADOOP-14188.09.patch, HADOOP-14188.10.patch, HADOOP-14188.11.patch
>
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so 
> we need to remove its usage in order to upgrade Mockito. Getter/setter 
> methods can be used instead of this hack.
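
For reference, a minimal sketch of the replacement pattern (class and member 
names here are hypothetical, not taken from the attached patches):

{code:java}
// Hypothetical example of replacing the reflective Whitebox hack.
class Metrics {}

class DataNodeExample {
  private Metrics metrics;

  // Before (Mockito 1.x internal API, removed in 2.x):
  //   Whitebox.setInternalState(dataNode, "metrics", mockMetrics);
  // After: a package-private setter gives tests the same access
  // without depending on Mockito internals.
  void setMetricsForTesting(Metrics metrics) {
    this.metrics = metrics;
  }
}
{code}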



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453540#comment-16453540
 ] 

Aaron Fabbri commented on HADOOP-13649:
---

Hey, thanks for the patch [~gabor.bota]. I should have more time to review 
this tomorrow.

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL) expiration for 
> LocalMetadataStore.
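
A minimal sketch of the TTL idea (hypothetical names, not the attached patch): 
each entry records its insertion time, and reads treat anything older than the 
configured TTL as absent.

{code:java}
// Sketch of time-based expiry for an in-memory metadata cache.
class TtlCache<K, V> {
  private static class Entry<V> {
    final V value;
    final long insertTimeMs;
    Entry(V value, long insertTimeMs) {
      this.value = value;
      this.insertTimeMs = insertTimeMs;
    }
  }

  private final java.util.Map<K, Entry<V>> map = new java.util.HashMap<>();
  private final long ttlMs;

  TtlCache(long ttlMs) {
    this.ttlMs = ttlMs;
  }

  void put(K key, V value) {
    map.put(key, new Entry<>(value, System.currentTimeMillis()));
  }

  V get(K key) {
    Entry<V> e = map.get(key);
    if (e == null) {
      return null;
    }
    if (System.currentTimeMillis() - e.insertTimeMs > ttlMs) {
      map.remove(key);  // expired: bound the window of inconsistency
      return null;
    }
    return e.value;
  }
}
{code}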



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2018-04-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453374#comment-16453374
 ] 

Akira Ajisaka commented on HADOOP-14188:


Thank you for rebasing, [~ehiggs]. Would you replace the usages in the 
following classes as well? I'm +1 if that is addressed.
{noformat}
$ find . -name "*.java" | xargs grep "import org.mockito.internal.util.reflection.Whitebox;"
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainersLauncher.java:import org.mockito.internal.util.reflection.Whitebox;
./hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java:import org.mockito.internal.util.reflection.Whitebox;
./hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java:import org.mockito.internal.util.reflection.Whitebox;
./hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKSMMetrcis.java:import org.mockito.internal.util.reflection.Whitebox;
{noformat}

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch, 
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch, 
> HADOOP-14188.06.patch, HADOOP-14188.07.patch, HADOOP-14188.08.patch, 
> HADOOP-14188.09.patch, HADOOP-14188.10.patch, HADOOP-14188.11.patch
>
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so 
> we need to remove its usage in order to upgrade Mockito. Getter/setter 
> methods can be used instead of this hack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453531#comment-16453531
 ] 

Hudson commented on HADOOP-15411:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14065 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14065/])
HADOOP-15411. AuthenticationFilter should use (wangda: rev 
3559d8b1dacf5cf207424de37cb6ba8865d26ffe)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java


> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: HADOOP-15411.1.patch
>
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}
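
For context, a sketch of the direction named in the title (the committed patch 
may differ in detail): Configuration.getPropsWithPrefix() copies the matching 
properties into a new map, so no iterator over the live Configuration is held 
while the filter config is built.

{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

class FilterConfigSketch {
  static Map<String, String> getFilterConfigMap(Configuration conf, String prefix) {
    // Returns a point-in-time copy (keys have the prefix stripped), avoiding
    // the ConcurrentModificationException seen when iterating the live config.
    return conf.getPropsWithPrefix(prefix);
  }
}
{code}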



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-04-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-10783:
--

Assignee: (was: Steve Loughran)

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Dmitry Sivachenko
>Priority: Major
> Attachments: commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns false).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get the following in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
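
For illustration, a minimal check of the flag in question against 
commons-lang3 (requires org.apache.commons:commons-lang3 on the classpath):

{code:java}
import org.apache.commons.lang3.SystemUtils;

public class OsCheck {
  public static void main(String[] args) {
    // commons-lang3 counts FreeBSD among the UNIX-like systems;
    // commons-lang 2.6 did not, which triggers the IOException above.
    System.out.println("OS name:    " + SystemUtils.OS_NAME);
    System.out.println("IS_OS_UNIX: " + SystemUtils.IS_OS_UNIX);
  }
}
{code}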



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15411:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed to branch-3.0/branch-3.1/trunk, thanks [~suma.shivaprasad].

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Critical
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-15411.1.patch
>
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451714#comment-16451714
 ] 

genericqa commented on HADOOP-15411:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 37m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HADOOP-15411 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920549/HADOOP-15411.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux db6dfda6efb1 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bb3c504 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14519/testReport/ |
| Max. process+thread count | 1467 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14519/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AuthenticationFilter should use 

[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451796#comment-16451796
 ] 

genericqa commented on HADOOP-15408:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 26m  5s{color} 
| {color:red} root generated 5 new + 1276 unchanged - 0 fixed = 1281 total (was 
1276) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 106 unchanged - 0 fixed = 107 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
35s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HADOOP-15408 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920580/split.prelim.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f4f9acf4047c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bb3c504 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| javac | 

[jira] [Updated] (HADOOP-15402) Prevent double logout of UGI's LoginContext

2018-04-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15402:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~daryn], [~bharatviswa], and [~tasanuma0829]!

> Prevent double logout of UGI's LoginContext
> ---
>
> Key: HADOOP-15402
> URL: https://issues.apache.org/jira/browse/HADOOP-15402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15402.patch
>
>
> HADOOP-15294 worked around a LoginContext NPE resulting from a double logout 
> by peering into the Subject.  A cleaner fix is tracking whether the 
> LoginContext is logged in.
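
A sketch of the stated approach (names hypothetical, not the committed patch): 
a wrapper tracks login state so a second logout becomes a no-op.

{code:java}
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

class GuardedLogin {
  private final LoginContext context;
  private boolean loggedIn = false;

  GuardedLogin(LoginContext context) {
    this.context = context;
  }

  synchronized void doLogin() throws LoginException {
    context.login();
    loggedIn = true;
  }

  synchronized void doLogout() throws LoginException {
    if (loggedIn) {  // skip the second logout instead of hitting the NPE
      context.logout();
      loggedIn = false;
    }
  }
}
{code}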



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Pablo San José (JIRA)
Pablo San José created HADOOP-15412:
---

 Summary: Hadoop KMS with HDFS keystore: No FileSystem for scheme 
"hdfs"
 Key: HADOOP-15412
 URL: https://issues.apache.org/jira/browse/HADOOP-15412
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.9.0, 2.7.2
 Environment: RHEL 7.3

Hadoop 2.7.2 and 2.9.0

 
Reporter: Pablo San José


I have been trying to configure the Hadoop KMS to use HDFS as the key provider, 
but this functionality seems to be failing. 

I followed the Hadoop docs for that matter, and I added the following field to 
my kms-site.xml:
{code:java}
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value>
  <description>
    URI of the backing KeyProvider for the KMS.
  </description>
</property>
{code}
That path exists in HDFS, and I expected the KMS to create the file test.jceks 
for its keystore. However, the KMS failed to start with this error:
{code:java}
ERROR: Hadoop KMS could not be started REASON: 
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
"hdfs" Stacktrace: --- 
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
"hdfs" at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
 at 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
 at 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
 at 
org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96) 
at 
org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
 at 
org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
 at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
at 
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803) 
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) at 
org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) at 
org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003) 
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
 at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
 
From what I could understand, it seems this error occurs because no FileSystem 
implementation is registered for the "hdfs" scheme. I have looked up this 
error, but it always refers to missing hdfs-client jars after an upgrade, 
which I have not done (this is a fresh installation). I have tested it using 
Hadoop 2.7.2 and 2.9.0.

Thank you in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15404) Remove multibyte characters in DataNodeUsageReportUtil

2018-04-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15404:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

+1, committed to trunk. Thanks [~tasanuma0829] for the contribution and thanks 
[~arpitagarwal] for the review.

> Remove multibyte characters in DataNodeUsageReportUtil
> --
>
> Key: HADOOP-15404
> URL: https://issues.apache.org/jira/browse/HADOOP-15404
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15404.1.patch
>
>
> DataNodeUsageReportUtil created by HDFS-13055 includes multibyte characters. 
> We need to remove them for building it with java9.
> {noformat}
> mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> ...
> [ERROR] 
> /hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/DataNodeUsageReportUtil.java:26:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]  * the delta between??current DataNode usage metrics and the 
> usage metrics
> {noformat}
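
A quick way to locate such characters in a source file (GNU grep shown; 
illustrative only):

{noformat}
grep -nP '[^\x00-\x7F]' DataNodeUsageReportUtil.java
{noformat}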



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15404) Remove multibyte characters in DataNodeUsageReportUtil

2018-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451856#comment-16451856
 ] 

Hudson commented on HADOOP-15404:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14062 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14062/])
HADOOP-15404. Remove multibyte characters in DataNodeUsageReportUtil (aajisaka: 
rev 1bd44becb0bb2577e68becc5d3abc647a68ef895)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/DataNodeUsageReportUtil.java


> Remove multibyte characters in DataNodeUsageReportUtil
> --
>
> Key: HADOOP-15404
> URL: https://issues.apache.org/jira/browse/HADOOP-15404
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15404.1.patch
>
>
> DataNodeUsageReportUtil created by HDFS-13055 includes multibyte characters. 
> We need to remove them for building it with java9.
> {noformat}
> mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> ...
> [ERROR] 
> /hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/DataNodeUsageReportUtil.java:26:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]  * the delta between??current DataNode usage metrics and the 
> usage metrics
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15402) Prevent double logout of UGI's LoginContext

2018-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451857#comment-16451857
 ] 

Hudson commented on HADOOP-15402:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14062 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14062/])
HADOOP-15402. Prevent double logout of UGI's LoginContext (aajisaka: rev 
69e1e6aee6de79586d4c25486b7d51477972cd83)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Prevent double logout of UGI's LoginContext
> ---
>
> Key: HADOOP-15402
> URL: https://issues.apache.org/jira/browse/HADOOP-15402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15402.patch
>
>
> HADOOP-15294 worked around a LoginContext NPE resulting from a double logout 
> by peering into the Subject.  A cleaner fix is tracking whether the 
> LoginContext is logged in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15395) DefaultImpersonationProvider fails to parse proxy user config if username has . in it

2018-04-25 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452006#comment-16452006
 ] 

Mukul Kumar Singh commented on HADOOP-15395:


Thanks for working on this [~ajayydv]. The patch looks good to me.
Can you please fix the checkstyle issues for the bug?

> DefaultImpersonationProvider fails to parse proxy user config if username has 
> . in it
> -
>
> Key: HADOOP-15395
> URL: https://issues.apache.org/jira/browse/HADOOP-15395
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15395.00.patch
>
>
> DefaultImpersonationProvider fails to parse proxy user config if username has 
> . in it. 
>  
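
To illustrate why a dotted username breaks parsing (usernames below are 
hypothetical): proxy-user settings are keyed as hadoop.proxyuser.$USER.hosts 
and hadoop.proxyuser.$USER.groups, so a "." inside the username makes the key 
boundary ambiguous.

{noformat}
# Unambiguous: username "ann"
hadoop.proxyuser.ann.hosts = *

# Ambiguous: user "ann.smith", or user "ann" with unknown sub-key "smith.hosts"?
hadoop.proxyuser.ann.smith.hosts = *
{noformat}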



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Pablo San José (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452056#comment-16452056
 ] 

Pablo San José commented on HADOOP-15412:
-

Hi Wei-Chiu,

Thank you very much for your quick response. So I misunderstood the 
documentation: I thought the KMS could use any of the providers listed in the 
provider types section of the Credential Provider API docs: 
[https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Provider_Types]

I understood that the HDFS NameNode was only consulted when access to an 
encryption zone was requested, because the metadata stored in the NameNode 
only contains the EDEKs for files in an encryption zone. I thought that, 
because the key provider is already encrypted by the KMS, it could live in a 
non-encrypted zone of HDFS. 

This setup would have been great for running the KMS in HA, because the 
instances could share the key provider and be configured very easily.

Thank you again for your help.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality seems to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:java}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file test.jceks 
> for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> 

[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452021#comment-16452021
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

Hi Pablo, thanks for filing the issue.

What you mention is not a valid use case. The KMS can't use HDFS as its 
backing storage. As you can imagine, if HDFS were used for the KMS, then each 
HDFS client file access would go through HDFS NameNode --> KMS --> HDFS 
NameNode --> KMS, and so on: a circular dependency.

The file-based KMS can use keystore files on the local file system. 

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality seems to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:java}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file test.jceks 
> for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
>  
> For what I could manage to understand, it seems that this error is because 
> there is no FileSystem implemented for HDFS. I have looked up this error but 
> it 

[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks

2018-04-25 Thread Voyta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452034#comment-16452034
 ] 

Voyta commented on HADOOP-15392:


[~mackrorysd] It's not caused by a MapReduce job, but by the HBase 
ExportSnapshot utility.

Here are our findings so far: 
 * We call the following command: 
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot 
-Dfs.s3a.buffer.dir=./s3a-buffer-dir -snapshot <snapshot name> -copy-to 
s3a://<bucket>/<path> -copy-from 
hdfs://<namenode>:8020/hbase -chuser hbase -chgroup hbase 
-bandwidth 20 

 * We have around 50,000 files exported per snapshot 
 * org.apache.hadoop.hbase.snapshot.ExportSnapshot class uses FileSystem 
instances that are S3AFileSystem instances in our case 
 * During its initialization phase, an S3AFileSystem instance calls 
S3AInstrumentation#registerAsMetricsSource(), which creates a new unique 
metric name (because of the counter _metricsSourceActiveCounter_) 
 * An S3AInstrumentation instance is registered in MetricsSystemImpl by calling 
MetricsSystemImpl#registerSource(String name, String desc, MetricsSource 
source) 

 * The S3AInstrumentation instance is the source argument 

 * There is a single org.apache.hadoop.metrics2.impl.MetricsSystemImpl 
instance, obtained via 
org.apache.hadoop.fs.s3a.S3AInstrumentation#getMetricsSystem(), and the 
MetricsSystemImpl instance holds all references in 
private final Map<String, MetricsSourceAdapter> sources 

So far, we don’t see any place where we can switch this off or avoid the 
accumulation by a configurable option. It might, however, be related to 
https://issues.apache.org/jira/browse/HBASE-20433 

Our observations, from a heap dump obtained after an out-of-memory error with 
a 1 GB max heap setting, are the following: 
 * approx. 53,000 S3AInstrumentation instances and its referenced instances 
(e.g. a MetricRegistry instance) 
 * approx. 53,000 long[] instances referenced by SampleQuantiles (216 MB size) 
 * approx. 2,700,000 org.apache.hadoop.metrics2.lib.MutableCounterLong 
instances (92 MB size) 
 * approx. 2,700,000 javax.management.MBeanAttributeInfo instances (140 MB 
size) 
 * approx. 2,700,000 javax.management.Attribute instances (87 MB) 
 * approx. 4,000,000 HashMap$Entry instances (170 MB) 
 * majority of memory is occupied by metric-related instances (approx. 90%) 
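
For context, a sketch of the FileSystem lifecycle involved (bucket name 
hypothetical): each S3AFileSystem registers a uniquely named metrics source at 
initialization, and this issue describes those sources surviving in the static 
metrics system. The usual lifecycle discipline looks like this, though per 
this report it may not be sufficient to release the metrics objects.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class FsLifecycleSketch {
  static void touchBucket(Configuration conf) throws Exception {
    // newInstance() bypasses the shared FileSystem cache; the caller owns
    // the instance and must close it, or per-instance state accumulates.
    FileSystem fs = FileSystem.newInstance(URI.create("s3a://example-bucket/"), conf);
    try {
      fs.exists(new Path("/"));
    } finally {
      fs.close();
    }
  }
}
{code}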

> S3A Metrics in S3AInstrumentation Cause Memory Leaks
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Major
>
> While using HBase S3A Export Snapshot utility we started to experience memory 
> leaks of the process after version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564 that added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When application uses S3AFileSystem instance that is not closed immediately 
> metrics are accumulated in this instance and memory grows without any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely as this 
> is not needed for Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15404) Remove multibyte characters in DataNodeUsageReportUtil

2018-04-25 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451972#comment-16451972
 ] 

Takanobu Asanuma commented on HADOOP-15404:
---

Thanks [~ajisakaa], and thanks again, [~arpitagarwal].

> Remove multibyte characters in DataNodeUsageReportUtil
> --
>
> Key: HADOOP-15404
> URL: https://issues.apache.org/jira/browse/HADOOP-15404
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15404.1.patch
>
>
> DataNodeUsageReportUtil created by HDFS-13055 includes multibyte characters. 
> We need to remove them for building it with java9.
> {noformat}
> mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> ...
> [ERROR] 
> /hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/DataNodeUsageReportUtil.java:26:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]  * the delta between??current DataNode usage metrics and the 
> usage metrics
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Pablo San José (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452123#comment-16452123
 ] 

Pablo San José commented on HADOOP-15412:
-

Yes, I want to implement KMS HA using HDFS to store the keystore.

As you said, this solution may conflict with the separation-of-duty design 
principle. However, if I understand how the KMS works correctly, an HDFS admin 
could access the keystore but, because the key provider is encrypted by the 
KMS and only the KMS can decrypt its contents, the admin wouldn't be able to 
decrypt anything in the cluster.

The problem I am facing trying to configure KMS in HA is that the KMS doesn't 
manage replication of the keystore data. So, for example, if two KMS instances 
are deployed, clients can be configured to retry against the next instance 
when a request to one KMS instance fails, but the data in the two KMS 
keystores would diverge if each uses a local filesystem. The only solution I 
can think of is a shared filesystem for the KMS instances, which may be good 
enough, but if the HA algorithm is something like round robin, there could be 
locking problems if the instances try to access the keystore concurrently.

As you said, KMS HA is not an easy task at all.

Thank you very much for your comments and your help.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality seems to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:java}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file test.jceks 
> for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> 

[jira] [Commented] (HADOOP-15409) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-04-25 Thread lqjack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452067#comment-16452067
 ] 

lqjack commented on HADOOP-15409:
-

https://github.com/apache/hadoop/pull/367

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15409
> URL: https://issues.apache.org/jira/browse/HADOOP-15409
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> In S3AFileSystem.initialize(), we check that the bucket exists with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000
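
For reference, a minimal sketch of the proposed switch (AWS SDK for Java v1; 
error handling elided):

{code:java}
import com.amazonaws.services.s3.AmazonS3;

class BucketCheckSketch {
  static void verifyBucketExists(AmazonS3 s3, String bucket) {
    // doesBucketExist() can report true even when credentials are invalid;
    // doesBucketExistV2() validates credentials, so bad auth fails fast here.
    if (!s3.doesBucketExistV2(bucket)) {
      throw new IllegalArgumentException("Bucket " + bucket + " does not exist");
    }
  }
}
{code}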



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2018-04-25 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452076#comment-16452076
 ] 

Ewan Higgs commented on HADOOP-14394:
-

Not sure if this is an OK place to ask or if I should take it to the dev list, 
but I would like to create a mock for {{class 
FileSystemDataOutputStreamBuilder}}, which is marked final. Is there a strong 
reason why this is a final class? Otherwise, I'll just remove the final marker. 
Thanks!

> Provide Builder pattern for DistributedFileSystem.create
> 
>
> Key: HADOOP-14394
> URL: https://issues.apache.org/jira/browse/HADOOP-14394
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14394.00.patch, HADOOP-14394.01.patch, 
> HADOOP-14394.02.patch, HADOOP-14394.03.patch, HADOOP-14394.04.patch, 
> HADOOP-14394.05.patch
>
>
> This JIRA continues to refine the {{FSOutputStreamBuilder}} interface 
> introduced in HDFS-11170. 
> It should also provide a spec for the Builder API.
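
For context, a hedged example of the builder-style create this JIRA refines; the option methods shown are from FSDataOutputStreamBuilder on trunk, and the exact set may differ by release:

{code:java}
// Sketch: creating a file through the builder instead of the long
// positional create() overloads; "fs" is an existing FileSystem instance.
FSDataOutputStream out = fs.createFile(new Path("/tmp/demo"))
    .overwrite(true)
    .replication((short) 3)
    .blockSize(128 * 1024 * 1024)
    .recursive()
    .build();
{code}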



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452084#comment-16452084
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

If I understand it correctly, you want to implement KMS HA using HDFS to store 
the keystore?

While I can see that as a simple & quick solution, it makes little sense to 
store the keystore in an unencrypted HDFS cluster. It also violates the initial 
design principle of separation of duties: with the keystore outside an 
encryption zone (EZ), an HDFS admin can easily decrypt anything in the cluster, 
voiding the need for KMS.

KMS HA is not a trivial task. Please consult this doc for reference: 
https://hadoop.apache.org/docs/current/hadoop-kms/index.html#High_Availability
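
As a side note, the startup failure itself just means the KMS JVM cannot resolve a FileSystem implementation for the "hdfs" scheme. A quick way to probe that resolution (a sketch, assuming hadoop-hdfs is on the classpath) is:

{code:java}
Configuration conf = new Configuration();
// Throws UnsupportedFileSystemException when no "hdfs" implementation is
// registered on the classpath, mirroring the KMS startup failure above.
Class<? extends FileSystem> impl = FileSystem.getFileSystemClass("hdfs", conf);
{code}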

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.7.9
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but it seems that this functionality is failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:java}
>  
>hadoop.kms.key.provider.uri
>jceks://h...@nn1.example.com/kms/test.jceks 
> 
>   URI of the backing KeyProvider for the KMS. 
> 
> {code}
> That path exists in HDFS, and I expect the KMS to create the file test.jceks 
> for its keystore. However, the KMS failed to start due to this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> 

[jira] [Commented] (HADOOP-15409) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452066#comment-16452066
 ] 

ASF GitHub Bot commented on HADOOP-15409:
-

GitHub user lqjack opened a pull request:

https://github.com/apache/hadoop/pull/367

HADOOP-15409

change to doesBucketExistV2 to verify the ACL

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lqjack/hadoop HADOOP-15409

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/367.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #367


commit 0c8af1b07cc9cde68f07fa84d88c191ae5aec6d8
Author: lqjaclee 
Date:   2018-04-25T11:07:15Z

HADOOP-15409

change to doesBucketExistV2 to verify the ACL




> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15409
> URL: https://issues.apache.org/jira/browse/HADOOP-15409
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> In S3AFileSystem.initialize(), we check that the bucket exists with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452153#comment-16452153
 ] 

Steve Loughran commented on HADOOP-15407:
-

[~fabbri]: [~chris.douglas] & I will be proposing a branch for this to be 
pulled in, so allow more detailed review & testing before merge into trunk.

But yes, a big patch. If you look closely, a lot of it is machine generated, so 
that part can be glanced at rather than worried over in detail (what can you 
do, change the code generator?).



> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch
>
>
> *Description*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>
>  *High level design*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>
>  abfs[s]://<container>@<account>.dfs.core.windows.net/<path>
>
>  ABFS is intended as a replacement for WASB. WASB is not deprecated but is 
> in pure maintenance mode, and customers should upgrade to ABFS once it hits 
> General Availability later in CY18.
>  Benefits of ABFS include:
>  · Higher scale (capacity, throughput, and IOPS) for Big Data and Analytics 
> workloads by allowing higher limits on storage accounts
>  · Removing any ramp-up time with Storage backend partitioning; blocks are 
> now automatically sharded across partitions in the Storage backend. This 
> avoids the need for temporary/intermediate files, which increase cost (and 
> framework complexity around committing jobs/tasks)
>  · Enabling much higher read and write throughput on single files (tens of 
> Gbps by default)
>  · Still retaining all of the Azure Blob features customers are familiar 
> with and expect, and gaining the benefits of future Blob features as well
>  ABFS incorporates Hadoop Filesystem metrics to monitor the file system 
> throughput and operations. Ambari metrics are not currently implemented for 
> ABFS, but will be available soon.
>
>  *Credits and history*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar Manii, Amit Singh, 
> Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, and James Baker.
>
>  *Test*
>  ABFS has gone through many test procedures, including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All 
> the JUnit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  Besides unit tests, we have used ABFS as the default file system in Azure 
> HDInsight. Azure HDInsight will very soon offer ABFS as a storage option. 
> (HDFS is also used, but not as the default file system.) Various customer 
> and test workloads have been run against clusters with such configurations 
> for quite some time. Benchmarks such as Tera*, TPC-DS, Spark Streaming and 
> Spark SQL, and others have been run to do scenario, performance, and 
> functional testing. Third parties and customers have also done various 
> testing of ABFS.
>  The current version reflects the version of the code tested and used in our 
> production environment.
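
A small illustration of addressing a path through the new scheme (container and account names are placeholders):

{code:java}
Path p = new Path("abfs://mycontainer@myaccount.dfs.core.windows.net/data/part-00000");
FileSystem fs = p.getFileSystem(new Configuration());
{code}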



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks

2018-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452219#comment-16452219
 ] 

Steve Loughran commented on HADOOP-15392:
-

wow. Yes, we need to fix this. If someone can write a patch for this I'll 
review it.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Major
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15392:

Priority: Blocker  (was: Major)
Target Version/s: 3.1.1

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2018-04-25 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-14188:

Status: Open  (was: Patch Available)

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch, 
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch, 
> HADOOP-14188.06.patch, HADOOP-14188.07.patch, HADOOP-14188.08.patch, 
> HADOOP-14188.09.patch, HADOOP-14188.10.patch, HADOOP-14188.11.patch
>
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so 
> we need to remove the usage to upgrade Mockito. Getter/setter methods can be 
> used instead of this hack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2018-04-25 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-14188:

Attachment: HADOOP-14188.11.patch

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch, 
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch, 
> HADOOP-14188.06.patch, HADOOP-14188.07.patch, HADOOP-14188.08.patch, 
> HADOOP-14188.09.patch, HADOOP-14188.10.patch, HADOOP-14188.11.patch
>
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so 
> we need to remove the usage to upgrade Mockito. Getter/setter methods can be 
> used instead of this hack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2018-04-25 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-14188:

Status: Patch Available  (was: Open)

Attaching patch 11, which rebases patch 10 onto the current trunk HEAD 
(626690612cd0957316628376744a8be62f891665).

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch, 
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch, 
> HADOOP-14188.06.patch, HADOOP-14188.07.patch, HADOOP-14188.08.patch, 
> HADOOP-14188.09.patch, HADOOP-14188.10.patch, HADOOP-14188.11.patch
>
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so 
> we need to remove the usage to upgrade Mockito. Getter/setter methods can be 
> used instead of this hack.
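
A hedged sketch of that replacement pattern; the class and field names below are illustrative only:

{code:java}
class Clockwork {
  private Clock clock = Clock.systemUTC();

  @VisibleForTesting
  void setClock(Clock c) {   // replaces Whitebox.setInternalState(obj, "clock", c)
    this.clock = c;
  }
}
{code}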



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15392:

Summary: S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase 
Export  (was: S3A Metrics in S3AInstrumentation Cause Memory Leaks)

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Major
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13649 started by Gabor Bota.
---
> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452326#comment-16452326
 ] 

Sean Mackrory commented on HADOOP-15392:


{quote}MapReduce job, but by hbase ExportSnapshot utility{quote}
{quote}It might, however, be related to 
https://issues.apache.org/jira/browse/HBASE-20433 {quote}

Yeah that's what I meant - ExportSnapshot is essentially a MapReduce job. I do 
see it closing the filesystem instances towards the end of doWork().

{quote}Yes, we need to fix this{quote}

Well let's make sure we're fixing the right problem first. 53,000 
S3AInstrumentation instances means S3AFileSystem.initialize is getting called 
once for every single file - that's also a lot of overhead that doesn't seem 
right to me. Has filesystem caching been disabled for some reason? And can you 
clarify what's configured in hadoop-metrics2.properties? I was testing with a 
much lower number of large files - but the threads I saw growing unbounded 
already only show up if you explicitly configure sinks for the s3a-file-system 
metrics. I'll try with a large number of files and verify that this 
accumulation is happening in threads that do exist without explicitly enabling 
them.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452326#comment-16452326
 ] 

Sean Mackrory edited comment on HADOOP-15392 at 4/25/18 2:20 PM:
-

{quote}MapReduce job, but by hbase ExportSnapshot utility{quote}
{quote}It might, however, be related to 
https://issues.apache.org/jira/browse/HBASE-20433 {quote}

Yeah that's what I meant - ExportSnapshot is essentially a MapReduce job. I do 
see it closing the filesystem instances towards the end of doWork(), fwiw. It's 
reasonable to assume those FS instances should be open for the whole duration 
of the job - so the fix most likely lives at the FS level here, even if it's 
not just disabling metrics by default.

{quote}Yes, we need to fix this{quote}

Well let's make sure we're fixing the right problem first. 53,000 
S3AInstrumentation instances means S3AFileSystem.initialize is getting called 
once for every single file - that's also a lot of overhead that doesn't seem 
right to me. Has filesystem caching been disabled for some reason? And can you 
clarify what's configured in hadoop-metrics2.properties? I was testing with a 
much lower number of large files - but the threads I saw growing unbounded 
already only show up if you explicitly configure sinks for the s3a-file-system 
metrics. I'll try with a large number of files and verify that this 
accumulation is happening in threads that do exist without explicitly enabling 
them.


was (Author: mackrorysd):
{quote}MapReduce job, but by hbase ExportSnapshot utility{quote}
{quote}It might, however, be related to 
https://issues.apache.org/jira/browse/HBASE-20433 {quote}

Yeah that's what I meant - ExportSnapshot is essentially a MapReduce job. I do 
see it closing the filesystem instances towards the end of doWork().

{quote}Yes, we need to fix this{quote}

Well let's make sure we're fixing the right problem first. 53,000 
S3AInstrumentation instances means S3AFileSystem.initialize is getting called 
once for every single file - that's also a lot of overhead that doesn't seem 
right to me. Has filesystem caching been disabled for some reason? And can you 
clarify what's configured in hadoop-metrics2.properties? I was testing with a 
much lower number of large files - but the threads I saw growing unbounded 
already only show up if you explicitly configure sinks for the s3a-file-system 
metrics. I'll try with a large number of files and verify that this 
accumulation is happening in threads that do exist without explicitly enabling 
them.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Voyta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452342#comment-16452342
 ] 

Voyta commented on HADOOP-15392:


[~mackrorysd] I was trying to locate hadoop-metrics2.properties, but all files 
I found contain only commented-out lines. So I'd assume there is no metrics 
configuration.

bq. ExportSnapshot is essentially a MapReduce job

If I observed it correctly, it is a standalone Java process that launches 
multiple MapReduce jobs. The problem is in the standalone process.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-13649:

Status: Patch Available  (was: In Progress)

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452661#comment-16452661
 ] 

Sean Mackrory edited comment on HADOOP-15392 at 4/25/18 5:15 PM:
-

{quote}I found contain only commented lines{quote}

Okay - thanks for checking. I believe the default is all commented out, except 
*.period=10, but I wasn't seeing the MutableQuantiles thread or the metrics 
system thread start up with only that either.

The 53,000 instances is still weird. Is fs.s3a.impl.disable.cache set to true? 
It's not the default, so if you're not setting it, this shouldn't be happening. 
Within a JVM, communication with 1 S3 bucket should usually be done with a 
single, cached instance of S3AFileSystem, which should only yield a single 
S3AInstrumentation instance and thus the memory usage should be 1/53,000th of what 
you're seeing.

{quote}The problem is in the standalone process.{quote}

Ah okay - thanks for clarifying. I've been ignoring that process in my own 
debugging, although the behavior I described should still be the same.


was (Author: mackrorysd):
{quote}I found contain only commented lines{quote}

Okay - thanks for checking. I believe the default is all commented out, except 
*.period=10, but I wasn't seeing the MutableQuantiles thread or the metrics 
system thread start up with only that either.

The 53,000 instances is still weird. Is fs.s3a.impl.disable.cache set to true? 
It's not the default, so if you're not setting it, this shouldn't be happening. 
Within a JVM, communication with 1 S3 bucket should usually be done with a 
single, cached instance of S3AFileSystem, which should only yield a single 
S3AInstrumentation instance and thus the memory usage should be 1/53,000th of what 
you're seeing.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2018-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452496#comment-16452496
 ] 

Steve Loughran commented on HADOOP-14394:
-

Probably to stop people subclassing it as a sanity check. I have no issues with 
making it non-final, other than my usual unhappiness with mocking: 
https://www.slideshare.net/steve_l/i-hate-mocking
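
For what it's worth, one possible middle ground once the Mockito 2 upgrade (HADOOP-14188) lands: the inline mock maker can mock final classes without touching the production modifier. A sketch, with the {{stream}} fixture assumed:

{code:java}
// Requires a classpath resource mockito-extensions/org.mockito.plugins.MockMaker
// containing the single line "mock-maker-inline" (Mockito 2.x+).
FileSystemDataOutputStreamBuilder builder =
    Mockito.mock(FileSystemDataOutputStreamBuilder.class);
Mockito.when(builder.build()).thenReturn(stream);
{code}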

> Provide Builder pattern for DistributedFileSystem.create
> 
>
> Key: HADOOP-14394
> URL: https://issues.apache.org/jira/browse/HADOOP-14394
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14394.00.patch, HADOOP-14394.01.patch, 
> HADOOP-14394.02.patch, HADOOP-14394.03.patch, HADOOP-14394.04.patch, 
> HADOOP-14394.05.patch
>
>
> This JIRA continues to refine the {{FSOutputStreamBuilder}} interface 
> introduced in HDFS-11170. 
> It should also provide a spec for the Builder API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452661#comment-16452661
 ] 

Sean Mackrory commented on HADOOP-15392:


{quote}I found contain only commented lines{quote}

Okay - thanks for checking. I believe the default is all commented out, except 
*.period=10, but I wasn't seeing the MutableQuantiles thread or the metrics 
system thread start up with only that either.

The 53,000 instances is still weird. Is fs.s3a.impl.disable.cache set to true? 
It's not the default, so if you're not setting it, this shouldn't be happening. 
Within a JVM, communication with 1 S3 bucket should usually be done with a 
single, cached instance of S3AFileSystem, which should only yield a single 
S3AInstrumentation instance and thus the memory usage should be 1/53,000th of what 
you're seeing.
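
To illustrate the expected caching behaviour (sketch; the bucket name is a placeholder):

{code:java}
Configuration conf = new Configuration();
FileSystem a = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
FileSystem b = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
// With the default fs.s3a.impl.disable.cache=false these are the same cached
// S3AFileSystem instance, so only one S3AInstrumentation should exist per
// bucket within the JVM.
assert a == b;
{code}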

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452519#comment-16452519
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

bq. Thanks for testing. The patch was to express the idea - seems it won't 
compile on trunk.
Thanks [~xiaochen] for the patch. I should have been more clear. I didn't test 
with your patch (split.patch).
{quote}
Identifiers for both tokens (i.e. KMS_DELEGATION_TOKEN and kms-dt) are the 
same (byte-to-byte), so we don't need another class 
KMSLegacyDelegationTokenIdentifier for the legacy token identifier.
{quote}
I have a different idea. I am uploading a patch (HADOOP-15408-001.patch). 
Please don't think I am intruding.
Let's see if we agree on one approach.


> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452538#comment-16452538
 ] 

Xiao Chen commented on HADOOP-15408:


bq. Please don't think I am intruding.
Not at all. More than happy to see you working on it.

bq. HADOOP-15408-001.patch
Could you elaborate? It seems if we do it this way, the new jar alone won't be 
able to decode kms-dt?

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452594#comment-16452594
 ] 

Xiao Chen commented on HADOOP-15408:


{noformat}
2018-04-20 21:09:53,273 ERROR [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.util.ServiceConfigurationError: 
org.apache.hadoop.security.token.TokenIdentifier: Provider 
org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
KMSLegacyDelegationTokenIdentifier could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
at org.apache.hadoop.security.token.Token.toString(Token.java:413)
at java.lang.String.valueOf(String.java:2994)
at 
org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
{noformat}
This isn't about renewal, but about decoding the token identifier. And because 
the new jar will encounter kms-dt (e.g. during a rolling upgrade), we need to 
fix it... I think the renewer is not affected here.
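
Roughly what Token.getClassForIdentifier does under the hood (simplified sketch; the real code caches the providers). The key point is that ServiceLoader instantiates every registered TokenIdentifier while scanning, so a provider compiled against a newer KMSDelegationToken throws NoSuchFieldError during iteration even when the token being decoded has an unrelated kind:

{code:java}
// "kind" is the Text kind of the token being decoded.
for (TokenIdentifier id : ServiceLoader.load(TokenIdentifier.class)) {
  if (id.getKind().equals(kind)) {
    return id.getClass();
  }
}
{code}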

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452447#comment-16452447
 ] 

Gabor Bota commented on HADOOP-13649:
-

Comments on my 001 patch:
1. Removed the LruHashMap implementation
 - Using com.google.common.cache.Cache
 - All tests pass
 - Losing the mruGet method. I was not able to find any test for this 
particular feature, or any 'real world' usage beyond the fact that it modifies 
the LinkedList inside the LinkedHashMap so that the most recently used 
elements sit at the start of the list. I don't think that would cause a big 
performance issue in testing, but I've run a little benchmark to check.[1]
 - Using basically the same implementation in LocalMetadataStore; the
new addition is only the timed eviction.
 - Of course I had to rename every hash to cache, and there were some other 
minor changes.

2. Created a test for the timed eviction (testCacheTimedEvictionAfterWrite)
 - Only in TestLocalMetadataStore (not in the abstract test)
 - It just tests that the cache implementation inside LocalMetadataStore works
as expected, nothing more.
 - Maybe additional tests could be added for higher-level functionality, but 
the problem is that the test would need to swap the default cache inside the 
instance for another cache built with support for a custom Ticker, to avoid 
real-time waiting for the eviction (which could cause flakiness). A sketch of 
that idea follows at the end of this comment.

3. Should the removal of the evicted elements be logged?
 - There is an option to log removed elements. It can be done synchronously 
(the default) or asynchronously. The synchronous way could be expensive, and 
I'm not even sure we want this (maybe good for debugging), so I haven't 
included it in the initial patch.

4. Maybe DEFAULT_EXPIRY_AFTER_WRITE_SECONDS should be 
DEFAULT_EVICTION_AFTER_WRITE_SECONDS

[1] The impact of having a cache instead of LruHashMap in LocalMetadataStore 
on tests:
 - mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
with patch: 4m10.917s
without patch: 4m12.413s
 - mvn -Dparallel-tests -DtestsThreadCount=8 clean test
with patch: 4m6.383s
without patch: 4m5.217s

Test and verify runs completed successfully against us-west-2 for the patch.
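
A minimal sketch of the timed eviction described above; the ticker is a stand-in so a test can advance time without sleeping, and PathMetadata matches the S3Guard type:

{code:java}
final AtomicLong fakeTime = new AtomicLong();
Ticker ticker = new Ticker() {
  @Override public long read() { return fakeTime.get(); }   // nanoseconds
};
Cache<Path, PathMetadata> cache = CacheBuilder.newBuilder()
    .expireAfterWrite(10, TimeUnit.SECONDS)
    .ticker(ticker)
    .build();
// Advancing the fake clock past the TTL expires entries without real waiting.
fakeTime.addAndGet(TimeUnit.SECONDS.toNanos(11));
{code}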

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15409) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452493#comment-16452493
 ] 

Steve Loughran commented on HADOOP-15409:
-

Looks good, but I'm afraid you'll have to do a full test run & tell us which 
endpoint: 
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/testing.html#Policy_for_submitting_patches_which_affect_the_hadoop-aws_module.

I'll hit "submit patch" here for the jenkins run, which doesn't include the 
functional object store tests, I'm afraid.

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15409
> URL: https://issues.apache.org/jira/browse/HADOOP-15409
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> in S3AFileSystem.initialize(), we check for the bucket existing with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000
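
For illustration, a sketch of the proposed switch (doesBucketExist() and 
doesBucketExistV2() are AmazonS3 methods in recent AWS SDKs; the wrapper 
class and error message are illustrative, and the real verifyBucketExists() 
also translates SDK exceptions):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import com.amazonaws.services.s3.AmazonS3;

// Sketch only -- not the actual S3AFileSystem code.
class BucketCheckSketch {
  static void verifyBucketExists(AmazonS3 s3, String bucket)
      throws IOException {
    // doesBucketExist() can report true even with bad credentials, so auth
    // problems surface much later; doesBucketExistV2() validates the
    // credentials, so misconfiguration fails fast at initialize() time.
    if (!s3.doesBucketExistV2(bucket)) {
      throw new FileNotFoundException("Bucket " + bucket + " does not exist");
    }
  }
}
{code}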



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15409) S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2

2018-04-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15409:

Target Version/s: 3.2.0
  Status: Patch Available  (was: Open)

> S3AFileSystem.verifyBucketExists to move to s3.doesBucketExistV2
> 
>
> Key: HADOOP-15409
> URL: https://issues.apache.org/jira/browse/HADOOP-15409
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> in S3AFileSystem.initialize(), we check for the bucket existing with 
> verifyBucketExists(), which calls s3.doesBucketExist(). But that doesn't 
> check for auth issues. 
> s3.doesBucketExistV2() does at least validate credentials, and should be 
> switched to. This will help things fail faster. 
> See SPARK-24000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452625#comment-16452625
 ] 

genericqa commented on HADOOP-13649:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HADOOP-13649 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920630/HADOOP-13649.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8a64cc2f6783 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6266906 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14523/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14523/testReport/ |
| Max. process+thread count | 304 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14523/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message 

[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452659#comment-16452659
 ] 

Steve Loughran commented on HADOOP-15392:
-

OK, it's in the launcher? And there are 53K instances of S3A FS there? This 
sounds like a new instance being created every time, rather than the cache 
being involved. 

There's more at stake than just metrics: each s3a instance creates a thread 
pool, which is v. expensive if the runtime starts allocating memory for each 
thread's stack, plus a pool of HTTP connections, which, if kept open, will use 
up a lot of TCP connections. What does netstat -a say on this machine?

Alternatively, if it's just the S3AInstrumentations which are hanging around, 
maybe there's some loop of ref counting going on?

I think we ought to look @ the hbase problem independently of the memory 
management of the metrics.

Voyta: thanks for your research here, it really helps us understand what's up. 
One thing: do make sure that fs.s3a.impl.disable.cache is either unset or 
false, as setting it to true would create lots of S3A instances.
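
As a quick sketch of why that setting matters (code and bucket name are 
illustrative, not from HBase or the S3A source):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// With caching enabled (the default), repeated get() calls for the same URI
// and user return the same FileSystem instance. With
// fs.s3a.impl.disable.cache=true, every get() constructs a brand new
// S3AFileSystem, each with its own thread pool and HTTP connection pool.
class FsCacheSketch {
  static void demo(Configuration conf) throws Exception {
    URI uri = URI.create("s3a://example-bucket/");  // hypothetical bucket
    FileSystem a = FileSystem.get(uri, conf);
    FileSystem b = FileSystem.get(uri, conf);
    System.out.println(a == b);  // true when the cache is enabled
  }
}
{code}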


> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely as this 
> is not needed for Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-13649:

Attachment: HADOOP-13649.001.patch

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-15408:

Attachment: HADOOP-15408-trunk.001.patch

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452551#comment-16452551
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

bq. the new jar alone won't be able to decode kms-dt?
Why do we want to decode in the first place?
We just want a TokenRenewer to renew the token, and the kind field in the 
Token class is enough to find the right TokenRenewer (i.e. 
KMSLegacyTokenRenewer).
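
To make that dispatch concrete, here is a sketch of a renewer keyed purely on 
the token kind (the class body is illustrative; the real KMSLegacyTokenRenewer 
lives in the KMS client code and is registered through META-INF/services so 
the ServiceLoader can find it):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

// Token.renew() walks the ServiceLoader-registered TokenRenewers and picks
// the first one whose handleKind() matches the token's kind field, so
// renewal never needs to decode the identifier bytes.
public class LegacyKmsRenewerSketch extends TokenRenewer {
  private static final Text LEGACY_KIND = new Text("kms-dt");

  @Override
  public boolean handleKind(Text kind) {
    return LEGACY_KIND.equals(kind);  // matched purely on the kind text
  }

  @Override
  public boolean isManaged(Token<?> token) throws IOException {
    return true;
  }

  @Override
  public long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    return 0;  // a real renewer would contact the KMS here
  }

  @Override
  public void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    // a real renewer would cancel the token at the KMS here
  }
}
{code}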

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452677#comment-16452677
 ] 

Steve Loughran commented on HADOOP-15392:
-

And of course, what do we see in ExportSnapshot [line 
792|https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java#L942]
{code}
srcConf.setBoolean("fs." + inputRoot.toUri().getScheme() + 
".impl.disable.cache", true);
{code}

That is: the entry point process does disable fs caching in both src and dest. 
Now, it is cleaning them up in [Line 
1074|https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java#L1074],
 but it's got me worried. It'd be better if they used 
FileSystem.newInstance(URI, config), so there'd be no altering of configs. Even 
so, I don't see from looking at the code where the other 52998 are coming from.
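
For contrast, a minimal sketch of the FileSystem.newInstance() alternative 
suggested above (class name illustrative):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// newInstance() always returns a fresh, uncached FileSystem, so the shared
// Configuration never needs the disable-cache flag flipped. Because it
// bypasses the cache, the caller must close() the instance, or its thread
// pool and HTTP connections are leaked.
class NewInstanceSketch {
  static void demo(Path inputRoot, Configuration srcConf) throws Exception {
    URI uri = inputRoot.toUri();
    try (FileSystem fs = FileSystem.newInstance(uri, srcConf)) {
      System.out.println(fs.getUri());
    }  // closed here, releasing the instance's resources
  }
}
{code}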


> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely as this 
> is not needed for Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-04-25 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452447#comment-16452447
 ] 

Gabor Bota edited comment on HADOOP-13649 at 4/25/18 3:17 PM:
--

Comments on my 001 patch:
1. Removed the LruHashMap implementation
 - Using the com.google.common.cache.Cache
 - All tests pass
 - Losing the mruGet method. I was not able to find any test for this 
particular feature, or any 'real world' usage beyond the fact that it 
modifies the LinkedList inside the LinkedHashMap so that the most recently 
modified elements are at the start of the List. I don't think losing it 
causes a big performance issue in testing, but I've made a little benchmark 
to check.[1]
 - LocalMetadataStore keeps basically the same implementation; the only new 
addition is the timed eviction.
 - Of course I had to rename every hash to cache, and there were some other 
minor changes.

2. Created a test for the timed eviction (testCacheTimedEvictionAfterWrite)
 - Only in TestLocalMetadataStore (not in the abstract test)
 - It just tests that the cache implementation inside LocalMetadataStore 
works as expected, nothing more.
 - Additional tests could be added for higher-level functionality, but the 
test would then need to swap the instance's default cache for one built with 
a custom Ticker, to avoid real-time waiting for the eviction (which could 
cause flakiness).

3. Should the removal of evicted elements be logged?
 - There is an option to log removed elements. It can be done synchronously 
(the default) or asynchronously. The synchronous way could be expensive, and 
I'm not even sure we want this (maybe useful for debugging), so I haven't 
included it in the initial patch.

4. Maybe DEFAULT_EXPIRY_AFTER_WRITE_SECONDS should be renamed to 
DEFAULT_EVICTION_AFTER_WRITE_SECONDS.

[1] The impact of using the Cache instead of the LruHashMap in 
LocalMetadataStore on test runtimes:
 - mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
with patch: 4m10.917s
without patch: 4m12.413s
 - mvn -Dparallel-tests -DtestsThreadCount=8 clean test
with patch: 4m6.383s
without patch: 4m5.217s

Test and verify runs on us-west-2 completed successfully for the patch.



> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch
>
>
> LocalMetadataStore is primarily a reference 

[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452744#comment-16452744
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

{quote}And because the new jar will meet kms-dt (e.g. rolling upgrade), we need 
to fix it.
{quote}
My bad. Ignore my patch then.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14951) KMSACL implementation is not configurable

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-14951:


Assignee: Zsombor Gegesy

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
>  Labels: key-management, kms
> Attachments: HADOOP-14951-9.patch
>
>
> Currently, it is not possible to customize KMS's key management if the 
> KMSACLs behaviour is not enough. An external key management solution would 
> need a higher-level API where it can decide whether a given operation is 
> allowed or not.
>  To achieve this, one solution would be to introduce a new interface that 
> could be implemented by KMSACLs - and also by other KMS implementations - 
> and a new configuration point could be added where the actual interface 
> implementation could be specified.
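
A hypothetical sketch of the shape such a hook could take (the interface, 
classes, and configuration key below are all illustrative, not taken from the 
attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical authorization hook; KMSACLs would become one implementation.
interface KeyAccessAuthorizer {
  boolean isOperationAllowed(String keyName, String user, String opType);
}

// Hypothetical default used when nothing is configured.
class AllowAllAuthorizer implements KeyAccessAuthorizer {
  public boolean isOperationAllowed(String keyName, String user, String opType) {
    return true;
  }
}

class AuthorizerLoaderSketch {
  // Illustrative name for the new configuration point.
  static final String IMPL_KEY = "hadoop.kms.key.authorizer.impl";

  static KeyAccessAuthorizer load(Configuration conf) {
    Class<? extends KeyAccessAuthorizer> clazz = conf.getClass(
        IMPL_KEY, AllowAllAuthorizer.class, KeyAccessAuthorizer.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}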



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452820#comment-16452820
 ] 

Wei-Chiu Chuang commented on HADOOP-14951:
--

Thanks for raising the issue [~zsombor]. I've added you to the contributor list and 
assigned the Jira to you.

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
>  Labels: key-management, kms
> Attachments: HADOOP-14951-9.patch
>
>
> Currently, it is not possible to customize KMS's key management if the 
> KMSACLs behaviour is not enough. An external key management solution would 
> need a higher-level API where it can decide whether a given operation is 
> allowed or not.
>  To achieve this, one solution would be to introduce a new interface that 
> could be implemented by KMSACLs - and also by other KMS implementations - 
> and a new configuration point could be added where the actual interface 
> implementation could be specified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452879#comment-16452879
 ] 

Wangda Tan commented on HADOOP-15411:
-

+1, thanks [~suma.shivaprasad], will commit by today if no objections.

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Blocker
> Attachments: HADOOP-15411.1.patch
>
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}
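
A minimal sketch of the fix named in the title (the wrapper class is 
illustrative; Configuration.getPropsWithPrefix() is a real method on current 
Hadoop versions):

{code}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

// Instead of iterating the live Configuration -- whose backing Properties
// can be mutated concurrently, producing the ConcurrentModificationException
// above -- take a point-in-time copy of the matching keys, with the prefix
// stripped off, and iterate that copy safely.
class FilterConfigSketch {
  static Map<String, String> getFilterConfig(Configuration conf,
      String configPrefix) {
    return conf.getPropsWithPrefix(configPrefix);
  }
}
{code}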



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452800#comment-16452800
 ] 

genericqa commented on HADOOP-15408:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 93 unchanged - 0 fixed = 94 total (was 93) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
45s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HADOOP-15408 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920636/HADOOP-15408-trunk.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2837a7f657f5 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6266906 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15411:

Priority: Critical  (was: Blocker)
Target Version/s: 3.2.0, 3.1.1, 3.0.3

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: HADOOP-15411.1.patch
>
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452967#comment-16452967
 ] 

Xiao Chen commented on HADOOP-15408:


No worries Rushabh. Would you be able to test whether the split approach helps 
with the failing Spark job?

FWIW I don't care about the ownership of jiras etc., so feel free to grab it 
and carry on. (As explained, I don't have cycles this week to dive into this 
with real Spark jobs, and I would like to see this fixed asap, as I'm sure you 
and Arpit do as well.)

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15408:
---
Affects Version/s: (was: 2.8.4)
 Target Version/s: 2.10.0, 2.9.1, 2.8.4, 3.2.0, 3.1.1, 3.0.3

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2018-04-25 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452962#comment-16452962
 ] 

genericqa commented on HADOOP-14188:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 50 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m  6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 24s{color} | {color:red} root generated 180 new + 1276 unchanged - 0 fixed = 1456 total (was 1276) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 21s{color} | {color:green} root: The patch generated 0 new + 956 unchanged - 1 fixed = 956 total (was 957) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 39s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 43s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 50s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 59s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}272m 21s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452992#comment-16452992
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

bq.  Would you be able to test whether the split approach helps on the failing
Spark job?
I am planning to put this patch in my internal build tonight and will ask the
Spark team to test it.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles Hadoop-related jars in its package.
>  Spark expects backward compatibility between minor versions.
>  A Spark job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded the {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from the new jar, and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} tries to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}}, a field that did not exist 
> in the older version.
>  Cc [~xiaochen]
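
For readers tracing the failure mode, here is a minimal sketch of the
ServiceLoader walk that triggers it. Only {{java.util.ServiceLoader}} and the
{{TokenIdentifier}} base class from the stack trace above are assumed; the scan
loop itself is illustrative, not code from the patch.

{code:java}
import java.util.ServiceLoader;

import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenIdentifierScan {
  public static void main(String[] args) {
    // ServiceLoader reflectively instantiates every provider listed under
    // META-INF/services/org.apache.hadoop.security.token.TokenIdentifier.
    // If a provider class comes from the new jar while the class it
    // references (KMSDelegationToken) was already loaded from the old jar,
    // the lookup of the new static field TOKEN_LEGACY_KIND throws
    // NoSuchFieldError, which ServiceLoader wraps in a
    // ServiceConfigurationError; that is the exact failure in the log above.
    for (TokenIdentifier id : ServiceLoader.load(TokenIdentifier.class)) {
      System.out.println(id.getKind());
    }
  }
}
{code}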



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453096#comment-16453096
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

Yeah... that's a problem. Even if you use a shared file system (like NFS?) you 
still need to make sure the network communication is authenticated and 
encrypted.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality appears to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://h...@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file 
> test.jceks for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
>  
> As far as I can understand, this error occurs because no FileSystem 
> implementation is available for HDFS. I have looked up this error, but the 
> results always point to missing hdfs-client jars after an upgrade, which 
> does not apply here (it is a fresh installation). I have tested this using 
> Hadoop 2.7.2 and 2.9.0
> Thank you in 

[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453169#comment-16453169
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

Filed HADOOP-15412 to get this documented. I'll close this Jira then.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality appears to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://h...@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file 
> test.jceks for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
>  
> As far as I can understand, this error occurs because no FileSystem 
> implementation is available for HDFS. I have looked up this error, but the 
> results always point to missing hdfs-client jars after an upgrade, which 
> does not apply here (it is a fresh installation). I have tested this using 
> Hadoop 2.7.2 and 2.9.0
> Thank you in advance.
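
As background on the exception itself, a minimal sketch of the lookup that
fails, assuming only the public {{FileSystem}} API visible in the stack trace:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SchemeProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // getFileSystemClass() first consults fs.hdfs.impl in the configuration,
    // then the implementations registered on the classpath via
    // META-INF/services/org.apache.hadoop.fs.FileSystem. The
    // "No FileSystem for scheme" exception means neither source knows the
    // scheme; here, the KMS webapp's classpath carries no HDFS client.
    Class<? extends FileSystem> cls =
        FileSystem.getFileSystemClass("hdfs", conf);
    System.out.println(cls.getName());
  }
}
{code}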



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HADOOP-15413) Document that KMS should not store keystore on HDFS

2018-04-25 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15413:


 Summary: Document that KMS should not store keystore on HDFS
 Key: HADOOP-15413
 URL: https://issues.apache.org/jira/browse/HADOOP-15413
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, kms
Reporter: Wei-Chiu Chuang


As pointed out in HADOOP-15412, the KMS should store its keystore on the local 
file system, not in HDFS. It would be nice to capture that in the KMS 
documentation (https://hadoop.apache.org/docs/current/hadoop-kms/index.html).
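
A sketch of the documented local-file form the KMS page could show; the
keystore name and location are placeholders, not taken from this issue:

{code:xml}
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/${user.home}/kms.keystore</value>
  <description>
    URI of the backing KeyProvider for the KMS: a keystore on the local
    file system rather than an HDFS path.
  </description>
</property>
{code}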



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453144#comment-16453144
 ] 

Ted Yu commented on HADOOP-15392:
-

From S3AInstrumentation:
{code}
  public void close() {
    synchronized (metricsSystemLock) {
      metricsSystem.unregisterSource(metricsSourceName);
      int activeSources = --metricsSourceActiveCounter;
      if (activeSources == 0) {
        metricsSystem.publishMetricsNow();
        metricsSystem.shutdown();
        metricsSystem = null;
{code}
How about adding a DEBUG log with the value of activeSources so that we know 
whether the {{activeSources == 0}} case is ever reached?
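
A sketch of what that could look like, assuming the class's existing SLF4J
logger; the log wording is hypothetical:

{code:java}
  public void close() {
    synchronized (metricsSystemLock) {
      metricsSystem.unregisterSource(metricsSourceName);
      int activeSources = --metricsSourceActiveCounter;
      // hypothetical diagnostic: record how many sources remain so the
      // shutdown branch below can be confirmed (or ruled out) in the field
      LOG.debug("Closed instrumentation; active metrics sources remaining: {}",
          activeSources);
      if (activeSources == 0) {
        metricsSystem.publishMetricsNow();
        metricsSystem.shutdown();
        metricsSystem = null;
      }
    }
  }
{code}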

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not involve any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453148#comment-16453148
 ] 

Sean Mackrory commented on HADOOP-15392:


[~te...@apache.org] I ran all of the s3a tests with such logging, and I checked 
that the ref counting stepped up and down exactly as expected. Tests that 
closed filesystems always reached 0 correctly - there was no skipping to -1 or 
getting stuck at 1.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not involve any static object that can grow 
> indefinitely.
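
Until that is addressed, one way a long-running client can bound the growth,
sketched with the standard {{FileSystem}} API (the bucket URI is a
placeholder):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class BoundedS3AUse {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    URI uri = URI.create("s3a://example-bucket/");  // placeholder bucket
    // newInstance() bypasses the shared FileSystem cache, so close() is safe
    // to call here; per the S3AInstrumentation.close() excerpt quoted earlier
    // in this thread, closing the instance unregisters its metrics source.
    FileSystem fs = FileSystem.newInstance(uri, conf);
    try {
      // ... run the export or other work against fs ...
    } finally {
      fs.close();
    }
  }
}
{code}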



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453160#comment-16453160
 ] 

Ted Yu commented on HADOOP-15392:
-

I meant running ExportSnapshot with the DEBUG log.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not involve any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453169#comment-16453169
 ] 

Wei-Chiu Chuang edited comment on HADOOP-15412 at 4/25/18 10:05 PM:


Filed HADOOP-15413 to get this documented. I'll close this Jira then.


was (Author: jojochuang):
Filed HADOOP-15412 to get this documented. I'll close this Jira then.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality appears to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://h...@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file 
> test.jceks for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
>  
> As far as I can understand, this error occurs because no FileSystem 
> implementation is available for HDFS. I have looked up this error, but the 
> results always point to missing hdfs-client jars after an upgrade, which 
> does not apply here (it is a fresh installation). I have 

[jira] [Resolved] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-15412.
--
Resolution: Won't Fix

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key 
> provider, but this functionality appears to be failing. 
> I followed the Hadoop docs for that matter, and I added the following field 
> to my kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://h...@nn1.example.com/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file 
> test.jceks for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
>  
> As far as I can understand, this error occurs because no FileSystem 
> implementation is available for HDFS. I have looked up this error, but the 
> results always point to missing hdfs-client jars after an upgrade, which 
> does not apply here (it is a fresh installation). I have tested this using 
> Hadoop 2.7.2 and 2.9.0
> Thank you in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-04-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453176#comment-16453176
 ] 

Sean Mackrory commented on HADOOP-15392:


Ooh yeah that's a good idea. I'll try an export run with a bunch of 
instrumentation added to ExportSnapshot and S3A.

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as they 
> are not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not involve any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453233#comment-16453233
 ] 

Xiao Chen commented on HADOOP-15408:


Thanks a lot!

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: HADOOP-15408-trunk.001.patch, split.patch, 
> split.prelim.patch
>
>
> Spark bundles Hadoop-related jars in its package.
>  Spark expects backward compatibility between minor versions.
>  A Spark job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded the {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from the new jar, and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} tries to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}}, a field that did not exist 
> in the older version.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org