[jira] [Updated] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15408:
---
Status: Patch Available  (was: Open)

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch, split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded the {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from the new jar, and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}}, which doesn't exist in the older jar.
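>  (A hypothetical probe of the mixed-classpath state, for illustration; this is 
> not from the original report. The JVM raises {{NoSuchFieldError}} when linking 
> the compiled field reference; plain reflection throws the checked 
> {{NoSuchFieldException}} instead, but it demonstrates the same missing field.)
> {code:java}
> public class FieldProbe {
>   public static void main(String[] args) throws Exception {
>     // Hypothetical probe: on a classpath where the old jar's
>     // KMSDelegationToken is loaded first, the lookup fails because the old
>     // class never declared TOKEN_LEGACY_KIND.
>     Class<?> c =
>         Class.forName("org.apache.hadoop.crypto.key.kms.KMSDelegationToken");
>     System.out.println(c.getField("TOKEN_LEGACY_KIND")); // NoSuchFieldException
>   }
> }
> {code}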
>  Cc [~xiaochen]






[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451674#comment-16451674
 ] 

Xiao Chen commented on HADOOP-15408:


Hi [~shahrs87], comments below and a request at the end.
Comments:
bq. Identifier for both the tokens are the same
Correct, and that's exactly why we can do the trick at KMSCP to copy a new kind.

bq. 4 tests are failing in TestKMS because they are trying to decodeIdentifier 
with kind kms-dt and Serviceloader is not able to find any Identifier which 
corresponds to kms-dt kind.
Thanks for testing. The patch was meant to express the idea; it seems it won't 
compile on trunk. I have cleaned up the compilation and don't see any test 
failures. Attaching a new patch so we can run pre-commit to see.

Back to the problem itself, which I think is another compatibility dimension, as 
Steve mentioned: when old jars and new jars are both present, we should behave 
the same way as if only new jars or only old jars were present.

The specific issue, as you discovered in the description, is that the service 
loader loads providers cumulatively. This means it loads 
{{org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSDelegationTokenIdentifier}}
 from both jars, and 
{{org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier}}
 from the new jar. I think this would trigger the [service 
loader|https://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html] 
to ignore duplicates, so in the order you provided, we'll end up with the token 
identifier from the old jar and the legacy token identifier from the new jar, 
which are both kms-dt. It seems that in the job, kms-dt mapped to the legacy 
token identifier (new jar), which upon looking up the {{TOKEN_LEGACY_KIND}} 
field reached the old jar's {{KMSDelegationToken}} class. (I'm not totally 
sure, but this seems to be the only explanation for the stack trace.)

So, in split.patch what I tried to do is to make sure:
# {{org.apache.hadoop.security.token.TokenIdentifier}} has different classes 
between the old jar and the new jar
# the new jar's legacy token does not collide with old classes, and everything 
is self-contained (within {{KMSDelegationTokenLegacy.java}})

I think this should make the service loader happy: no matter whether kms-dt at 
run time maps to {{KMSDelegationTokenIdentifier}} in the old jar or 
{{KMSLegacyDelegationTokenIdentifier}} in the new jar, the identifier can be 
initialized.
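For reference, a simplified sketch of how the kind-to-identifier map gets built 
via the service loader (assumption: names and structure are simplified from 
Hadoop's actual {{Token#getClassForIdentifier}}; this is not the real code). It 
shows both why a failing provider constructor surfaces as the 
{{ServiceConfigurationError}} above and why duplicate kinds silently overwrite 
each other:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.TokenIdentifier;

public final class TokenKindResolver {
  // Every registered provider is instantiated during iteration, so a
  // NoSuchFieldError thrown by any provider's constructor aborts the scan
  // with a ServiceConfigurationError before the map is ever built.
  static Class<? extends TokenIdentifier> classForKind(Text kind) {
    Map<Text, Class<? extends TokenIdentifier>> kinds = new HashMap<>();
    for (TokenIdentifier id : ServiceLoader.load(TokenIdentifier.class)) {
      // If two providers report the same kind (e.g. two jars both mapping
      // kms-dt), whichever is iterated last wins this slot.
      kinds.put(id.getKind(), id.getClass());
    }
    return kinds.get(kind);
  }
}
{code}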

Request:
Sorry, I won't have cycles this week to try to reproduce and debug a real Spark 
case. Since you already have a failing Spark job, do you think you can test 
this with that use case and verify whether it fixes the issue? It would be 
great if [~arpitagarwal] could try the Ranger case as well.
Thanks!

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch, split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> 

[jira] [Updated] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15408:
---
Attachment: split.prelim.patch

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch, split.prelim.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded the {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from the new jar, and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}}, which doesn't exist in the older jar.
>  Cc [~xiaochen]






[jira] [Commented] (HADOOP-15410) hoop-auth org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider org.apache.log4j package compile error

2018-04-24 Thread lqjack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451603#comment-16451603
 ] 

lqjack commented on HADOOP-15410:
-

Thanks for your advice. I have changed it to provided.

> hoop-auth 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider 
> org.apache.log4j package compile error
> --
>
> Key: HADOOP-15410
> URL: https://issues.apache.org/jira/browse/HADOOP-15410
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: lqjack
>Priority: Major
>
> When running 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider, the 
> IDE will automatically compile the Java class, but unluckily org.apache.log4j 
> fails to compile. 
> We should change the pom.xml from
> {code:xml}
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>runtime</scope>
> </dependency>
> {code}
> to
> {code:xml}
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>compile</scope>
> </dependency>
> {code}






[jira] [Commented] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2

2018-04-24 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451569#comment-16451569
 ] 

SammiChen commented on HADOOP-15385:


Thanks [~jlowe] for the quick fix and [~djp] for the commit. I'm glad the issue 
is limited to the test cases and doesn't impact the production code.

> Many tests are failing in hadoop-distcp project in branch-2
> ---
>
> Key: HADOOP-15385
> URL: https://issues.apache.org/jira/browse/HADOOP-15385
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.8.2
>Reporter: Rushabh S Shah
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 2.10.0, 2.8.4, 2.9.2
>
> Attachments: HADOOP-15385-branch-2.001.patch
>
>
> Many tests are failing in hadoop-distcp project in branch-2.8
> Below are the failing tests.
> {noformat}
> Failed tests: 
>   
> TestDistCpViewFs.testUpdateGlobTargetMissingSingleLevel:326->checkResult:428 
> expected:<4> but was:<5>
>   TestDistCpViewFs.testGlobTargetMissingMultiLevel:346->checkResult:428 
> expected:<4> but was:<5>
>   TestDistCpViewFs.testGlobTargetMissingSingleLevel:306->checkResult:428 
> expected:<2> but was:<3>
>   TestDistCpViewFs.testUpdateGlobTargetMissingMultiLevel:367->checkResult:428 
> expected:<6> but was:<8>
>   TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 
> expected:<2> but was:<3>
>   TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 
> expected:<6> but was:<8>
>   TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 
> expected:<2> but was:<3>
>   TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 
> expected:<6> but was:<8>
>   TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 
> expected:<2> but was:<3>
>   TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 
> expected:<6> but was:<8>
> Tests run: 258, Failures: 16, Errors: 0, Skipped: 0
> {noformat}
> {noformat}
> rushabhs$ pwd
> /Users/rushabhs/hadoop/apacheHadoop/hadoop/hadoop-tools/hadoop-distcp
> rushabhs$ git branch
>  branch-2
>   branch-2.7
> * branch-2.8
>   branch-2.9
>   branch-3.0
>  rushabhs$ git log --oneline | head -n3
> c4ea1c8bb73 HADOOP-14970. MiniHadoopClusterManager doesn't respect lack of 
> format option. Contributed by Erik Krogen
> 1548205a845 YARN-8147. TestClientRMService#testGetApplications sporadically 
> fails. Contributed by Jason Lowe
> c01b425ba31 YARN-8120. JVM can crash with SIGSEGV when exiting due to custom 
> leveldb logger. Contributed by Jason Lowe.
> {noformat}






[jira] [Updated] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2

2018-04-24 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-15385:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.9.2
  2.8.4
  2.10.0
Target Version/s: 2.10.0, 2.8.4, 2.9.2  (was: 2.10.0, 2.9.1, 2.8.4)
  Status: Resolved  (was: Patch Available)

I have committed the patch to branch-2, branch-2.9 and branch-2.8. Thanks 
[~shahrs87] for reporting the issue, [~jlowe] for delivering the fix, and 
[~Sammi] for the comments! I haven't committed it to the 2.9.1 branch; I'll 
leave that up to [~Sammi]'s decision given that RC0 is out.

> Many tests are failing in hadoop-distcp project in branch-2
> ---
>
> Key: HADOOP-15385
> URL: https://issues.apache.org/jira/browse/HADOOP-15385
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.8.2
>Reporter: Rushabh S Shah
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 2.10.0, 2.8.4, 2.9.2
>
> Attachments: HADOOP-15385-branch-2.001.patch
>
>
> Many tests are failing in hadoop-distcp project in branch-2.8
> Below are the failing tests.
> {noformat}
> Failed tests: 
>   
> TestDistCpViewFs.testUpdateGlobTargetMissingSingleLevel:326->checkResult:428 
> expected:<4> but was:<5>
>   TestDistCpViewFs.testGlobTargetMissingMultiLevel:346->checkResult:428 
> expected:<4> but was:<5>
>   TestDistCpViewFs.testGlobTargetMissingSingleLevel:306->checkResult:428 
> expected:<2> but was:<3>
>   TestDistCpViewFs.testUpdateGlobTargetMissingMultiLevel:367->checkResult:428 
> expected:<6> but was:<8>
>   TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 
> expected:<2> but was:<3>
>   TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 
> expected:<6> but was:<8>
>   TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 
> expected:<2> but was:<3>
>   TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 
> expected:<6> but was:<8>
>   TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 
> expected:<4> but was:<5>
>   TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 
> expected:<2> but was:<3>
>   TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 
> expected:<6> but was:<8>
> Tests run: 258, Failures: 16, Errors: 0, Skipped: 0
> {noformat}
> {noformat}
> rushabhs$ pwd
> /Users/rushabhs/hadoop/apacheHadoop/hadoop/hadoop-tools/hadoop-distcp
> rushabhs$ git branch
>  branch-2
>   branch-2.7
> * branch-2.8
>   branch-2.9
>   branch-3.0
>  rushabhs$ git log --oneline | head -n3
> c4ea1c8bb73 HADOOP-14970. MiniHadoopClusterManager doesn't respect lack of 
> format option. Contributed by Erik Krogen
> 1548205a845 YARN-8147. TestClientRMService#testGetApplications sporadically 
> fails. Contributed by Jason Lowe
> c01b425ba31 YARN-8120. JVM can crash with SIGSEGV when exiting due to custom 
> leveldb logger. Contributed by Jason Lowe.
> {noformat}






[jira] [Comment Edited] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451504#comment-16451504
 ] 

Rushabh S Shah edited comment on HADOOP-15408 at 4/25/18 1:44 AM:
--

Thanks [~xiaochen] for the patch.
Had an offline chat with [~daryn] on the proposed fix.
Below is the summary.
The identifier for both tokens (i.e. KMS_DELEGATION_TOKEN and kms-dt) is the 
same (byte for byte), so we don't need another class 
{{KMSLegacyDelegationTokenIdentifier}} for the legacy token identifier.
 The kind in the identifier doesn't mean much.

After removing the {{KMSLegacyDelegationTokenIdentifier}} class and its 
{{KMSLegacyDelegationTokenIdentifier}} registration under 
{{org.apache.hadoop.security.token.TokenIdentifier}}, 4 tests are failing in 
TestKMS because they try to decodeIdentifier with kind {{kms-dt}} and the 
ServiceLoader is not able to find any identifier that corresponds to the 
{{kms-dt}} kind.
 Since it is test-only code, we can change the test.
 Let me know if this makes sense.
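To illustrate the failure mode, a hypothetical sketch (placeholder empty byte 
arrays; this is not the actual TestKMS code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class LegacyKindSketch {
  public static void main(String[] args) throws IOException {
    Token<TokenIdentifier> legacy = new Token<>(new byte[0], new byte[0],
        new Text("kms-dt"), new Text("kms-service"));
    // With the legacy identifier's service registration removed, no provider
    // maps "kms-dt", so decodeIdentifier() can no longer resolve an
    // identifier class for this token.
    System.out.println(legacy.decodeIdentifier());
  }
}
{code}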


was (Author: shahrs87):
Thanks [~xiaochen] for the patch.
Had an offline chat with [~daryn] on the proposed fix.
The identifier for both tokens (i.e. KMS_DELEGATION_TOKEN and kms-dt) is the 
same (byte for byte), so we don't need another class 
{{KMSLegacyDelegationTokenIdentifier}} for the legacy token identifier.
The kind in the identifier doesn't mean much.

After removing the {{KMSLegacyDelegationTokenIdentifier}} class and its 
{{KMSLegacyDelegationTokenIdentifier}} registration under 
{{org.apache.hadoop.security.token.TokenIdentifier}}, 4 tests are failing in 
TestKMS because they try to decodeIdentifier with kind {{kms-dt}} and the 
ServiceLoader is not able to find any identifier that corresponds to the 
{{kms-dt}} kind.
Since it is test-only code, we can change the test.
Let me know if this makes sense.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded the {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from the new jar, and it 
> fails when 

[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451504#comment-16451504
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

Thanks [~xiaochen] for the patch.
Had an offline chat with [~daryn] on the proposed fix.
The identifier for both tokens (i.e. KMS_DELEGATION_TOKEN and kms-dt) is the 
same (byte for byte), so we don't need another class 
{{KMSLegacyDelegationTokenIdentifier}} for the legacy token identifier.
The kind in the identifier doesn't mean much.

After removing the {{KMSLegacyDelegationTokenIdentifier}} class and its 
{{KMSLegacyDelegationTokenIdentifier}} registration under 
{{org.apache.hadoop.security.token.TokenIdentifier}}, 4 tests are failing in 
TestKMS because they try to decodeIdentifier with kind {{kms-dt}} and the 
ServiceLoader is not able to find any identifier that corresponds to the 
{{kms-dt}} kind.
Since it is test-only code, we can change the test.
Let me know if this makes sense.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.<init>(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded the {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from the new jar, and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}}, which doesn't exist in the older jar.
>  Cc [~xiaochen]






[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-04-24 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451492#comment-16451492
 ] 

Aaron Fabbri commented on HADOOP-15407:
---

Wow this is a big patch. (Aside: We really need to move away from mega-patches 
IMO, it is antithetical to quality code reviews.) 
{quote}Third parties and customers have also done various testing of ABFS.
{quote}
Are there any specific reasons you didn't do this work with the Apache 
community? If there are, we should try to address them.

It is much easier for folks like me to digest if this is done as a series of 
smaller commits on a feature branch. Do you have a clean commit history you 
can push to a public branch on GitHub?
{quote} WASB is not deprecated but is in pure maintenance mode and customers 
should upgrade to ABFS once it hits General Availability later in CY18.
{quote}
Might want to add some caveats around that. ;)

 

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch
>
>
> *Description*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  *High level design*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {{abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>}}
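>  For illustration, a hypothetical snippet of addressing such a path from 
> client code (the filesystem and account names below are placeholders):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class AbfsPathExample {
>   public static void main(String[] args) throws Exception {
>     // "myfs" (filesystem) and "myaccount" (storage account) are placeholders.
>     Path p = new Path("abfs://myfs@myaccount.dfs.core.windows.net/data/input.csv");
>     FileSystem fs = p.getFileSystem(new Configuration()); // resolves the abfs scheme
>     System.out.println(fs.getUri());
>   }
> }
> {code}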
>  ABFS is intended as a replacement for WASB. WASB is not deprecated, but it is 
> in pure maintenance mode, and customers should upgrade to ABFS once it hits 
> General Availability later in CY18.
>  Benefits of ABFS include:
>  * Higher scale (capacity, throughput, and IOPS) for Big Data and Analytics 
> workloads by allowing higher limits on storage accounts
>  * Removing any ramp-up time with Storage backend partitioning; blocks are now 
> automatically sharded across partitions in the Storage backend. This avoids 
> the need for temporary/intermediate files, which would otherwise increase the 
> cost (and framework complexity) around committing jobs/tasks
>  * Enabling much higher read and write throughput on single files (tens of 
> Gbps by default)
>  * Retaining all of the Azure Blob features customers are familiar with and 
> expect, while gaining the benefits of future Blob features as well
>  ABFS incorporates Hadoop Filesystem metrics to monitor the file system 
> throughput and operations. Ambari metrics are not currently implemented for 
> ABFS, but will be available soon.
>  *Credits and history*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar Manii, Amit Singh, 
> Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, and James Baker.
>  *Test*
>  ABFS has gone through many test procedures, including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  Besides unit tests, we have used ABFS as the default file system in Azure 
> HDInsight. Azure HDInsight will very soon offer ABFS as a storage option. 
> (HDFS is also used, but not as the default file system.) Various different 
> customer and test workloads have been run against clusters with such 
> configurations for quite some time. Benchmarks such as Tera*, TPC-DS, Spark 
> Streaming and Spark SQL, and others have been run to do scenario, performance, 
> and functional testing. Third parties and customers have also done various 
> testing of ABFS.
>  The current version reflects the version of the code tested and used in our 
> production

[jira] [Commented] (HADOOP-15397) Failed to start the estimator of Resource Estimator Service

2018-04-24 Thread Sergiy Matusevych (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451477#comment-16451477
 ] 

Sergiy Matusevych commented on HADOOP-15397:


[~zhangbutao] The change looks sane. Unfortunately, I cannot test it in my 
environment, but you have my approval anyway :)

> Failed to start the estimator of Resource Estimator Service
> ---
>
> Key: HADOOP-15397
> URL: https://issues.apache.org/jira/browse/HADOOP-15397
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.9.0
>Reporter: zhangbutao
>Priority: Major
> Fix For: 2.9.0
>
> Attachments: HADOOP-15397-001.path, 
> HADOOP-15397-branch-2.9.0.003.patch, HADOOP-15397.002.patch
>
>
> You would get the following log if you start the estimator using the script 
> start-estimator.sh, and the estimator is not started.
> {code:java}
> starting resource estimator service
> starting estimator, logging to 
> /hadoop/share/hadoop/tools/resourceestimator/bin/../../../../../logs/hadoop-resourceestimator.out
> /hadoop/share/hadoop/tools/resourceestimator/bin/estimator-daemon.sh: line 
> 47: bin/estimator.sh: No such file or directory{code}
> Fix the bug in the script estimator-daemon.sh.






[jira] [Commented] (HADOOP-15404) Remove multibyte characters in DataNodeUsageReportUtil

2018-04-24 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451464#comment-16451464
 ] 

Takanobu Asanuma commented on HADOOP-15404:
---

Thanks for reviewing it, [~arpitagarwal]! Could you also commit it?

> Remove multibyte characters in DataNodeUsageReportUtil
> --
>
> Key: HADOOP-15404
> URL: https://issues.apache.org/jira/browse/HADOOP-15404
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15404.1.patch
>
>
> DataNodeUsageReportUtil, created by HDFS-13055, includes multibyte characters. 
> We need to remove them to build it with Java 9.
> {noformat}
> mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> ...
> [ERROR] 
> /hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/DataNodeUsageReportUtil.java:26:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]  * the delta between??current DataNode usage metrics and the 
> usage metrics
> {noformat}






[jira] [Updated] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-24 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated HADOOP-15411:
--
Attachment: HADOOP-15411.1.patch

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Blocker
> Attachments: HADOOP-15411.1.patch
>
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}






[jira] [Updated] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-24 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated HADOOP-15411:
--
Status: Patch Available  (was: Open)

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Blocker
> Attachments: HADOOP-15411.1.patch
>
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}






[jira] [Assigned] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-24 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad reassigned HADOOP-15411:
-

Assignee: Suma Shivaprasad

> AuthenticationFilter should use Configuration.getPropsWithPrefix instead of 
> iterator
> 
>
> Key: HADOOP-15411
> URL: https://issues.apache.org/jira/browse/HADOOP-15411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Blocker
>
> Node manager start up fails with the following stack trace
> {code}
> 2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
> (NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
>  ... 5 more
> Caused by: java.io.IOException: java.util.ConcurrentModificationException
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
>  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
>  ... 8 more
> Caused by: java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
>  at 
> org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
>  at 
> org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
>  at 
> org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
>  at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
>  ... 11 more
> 2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
> collector to send metrics to. Metrics to be sent will be discarded. This 
> message will be skipped for the next 20 times.
> {code}






[jira] [Created] (HADOOP-15411) AuthenticationFilter should use Configuration.getPropsWithPrefix instead of iterator

2018-04-24 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created HADOOP-15411:
-

 Summary: AuthenticationFilter should use 
Configuration.getPropsWithPrefix instead of iterator
 Key: HADOOP-15411
 URL: https://issues.apache.org/jira/browse/HADOOP-15411
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Suma Shivaprasad


Node manager start up fails with the following stack trace

{code}
2018-04-19 13:08:30,638 ERROR nodemanager.NodeManager 
(NodeManager.java:initAndStartNodeManager(921)) - Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
start.
 at 
org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:117)
 at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
 at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
 at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
 at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:919)
 at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:979)
Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
server
 at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:377)
 at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:424)
 at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:420)
 at 
org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
 ... 5 more
Caused by: java.io.IOException: java.util.ConcurrentModificationException
 at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:532)
 at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
 at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:421)
 at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:333)
 ... 8 more
Caused by: java.util.ConcurrentModificationException
 at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
 at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2853)
 at 
org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:73)
 at org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:647)
 at 
org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:637)
 at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:525)
 ... 11 more
2018-04-19 13:08:30,639 INFO timeline.HadoopTimelineMetricsSink 
(AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live 
collector to send metrics to. Metrics to be sent will be discarded. This 
message will be skipped for the next 20 times.
{code}
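For context, a hedged sketch of the direction the summary suggests (inferred 
from the issue title, not from an attached patch): snapshot the prefixed 
properties in one call instead of iterating the live Configuration, whose 
backing Hashtable enumerator throws ConcurrentModificationException when 
another thread mutates it mid-iteration.

{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class FilterConfigSketch {
  // Configuration#getPropsWithPrefix copies the matching keys (with the
  // prefix stripped) into a fresh Map, so callers never enumerate the live
  // backing properties the way Configuration#iterator does.
  static Map<String, String> getFilterConfig(Configuration conf, String prefix) {
    return conf.getPropsWithPrefix(prefix);
  }
}
{code}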






[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks

2018-04-24 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451243#comment-16451243
 ] 

Sean Mackrory commented on HADOOP-15392:


{quote}if you or someone else would care to provide a switch to turn this 
off{quote}

If there's a problem we can't fix I'd be happy to, but there shouldn't be any 
significant difference between how the relatively short-lived connector is 
accumulating metrics and how long-running daemons do it. If there's a memory 
leak then either we're doing it wrong and we should fix it (unless the fix is 
worse than disabling metrics by default or something), or this is a very big 
problem indeed that may be affecting other Hadoop daemons. I ran a profiler and 
saw 2 metrics-related threads pop up: one is an executor for MetricsSinkAdapter 
that tends to grow by about 6kb every period of the metrics system (which by 
default is 10 seconds) and there's another one for MutableQuantiles that grows 
by about 64 bytes per second. Now I don't see why either of those needs to grow 
indefinitely like that, but that rate also sounds insignificant compared to the 
kind of growth being described here, and the behavior is the same between the 
other Hadoop daemons that were unaffected by the change in question and S3a 
clients. It also doesn't appear to happen unless you have metrics sinks 
configured, etc. and the default configuration doesn't have any of that. I also 
ran a bunch of HBase snapshot exports to S3 both with and without this change 
and I'm not seeing any particular pattern in the memory usage that matches what 
you're describing. So a couple of follow-up questions:

* To be clear, exactly where are you seeing the growth in memory usage? As I 
understand it this is in the MapReduce job that exports snapshots to S3, right? 
If that's the case, can you identify a particular thread that seems to be 
accumulating all the memory? I initially thought you were referring to one of 
the HBase daemons, but if this is the MR job then whether or not it closes the 
FileSystem is probably rather academic because it would close the FileSystem as 
the JVM was about to shut down anyway, so it likely doesn't affect whether or 
not you see a problem with memory usage.
* Do you have anything in hadoop-metrics2.properties at all?
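(For reference, sink threads like the ones above should only get started when 
something along these lines is configured; a hypothetical 
hadoop-metrics2.properties example, with placeholder prefix and file name:)

{noformat}
# Hypothetical sink configuration; with no sink entries at all, the metrics
# system should not be flushing anything anywhere.
*.period=10
s3a-file-system.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
s3a-file-system.sink.file.filename=s3a-metrics.out
{noformat}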

> S3A Metrics in S3AInstrumentation Cause Memory Leaks
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Major
>
> While using HBase S3A Export Snapshot utility we started to experience memory 
> leaks of the process after version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564 that added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When application uses S3AFileSystem instance that is not closed immediately 
> metrics are accumulated in this instance and memory grows without any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely as this 
> is not needed for Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.






[jira] [Comment Edited] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks

2018-04-24 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451243#comment-16451243
 ] 

Sean Mackrory edited comment on HADOOP-15392 at 4/24/18 9:25 PM:
-

{quote}if you or someone else would care to provide a switch to turn this 
off{quote}

If there's a problem we can't fix I'd be happy to, but there shouldn't be any 
significant difference between how the relatively short-lived connector is 
accumulating metrics and how long-running daemons do it. If there's a memory 
leak then either we're doing it wrong and we should fix it (unless the fix is 
worse than disabling metrics by default or something), or this is a very big 
problem indeed that may be affecting other Hadoop daemons. I ran a profiler and 
saw 2 metrics-related threads pop up: one is an executor for MetricsSinkAdapter 
that tends to grow by about 6kb every period of the metrics system (which by 
default is 10 seconds) and there's another one for MutableQuantiles that grows 
by about 64 bytes per second. Now I don't see why either of those needs to grow 
indefinitely like that, but that rate also sounds insignificant compared to the 
kind of growth being described here, and the behavior is the same between the 
other Hadoop daemons that were unaffected by the change in question and S3a 
clients. It also doesn't appear to happen unless you have metrics sinks 
configured, etc. and the default configuration doesn't have any of that. I also 
ran a bunch of HBase snapshot exports to S3 both with and without this change 
and I'm not seeing any particular pattern in the memory usage that matches what 
you're describing. So a couple of follow-up questions:

* To be clear, exactly where are you seeing the growth in memory usage? As I 
understand it this is in the MapReduce job that exports snapshots to S3, right? 
If that's the case, can you identify a particular thread that seems to be 
accumulating all the memory? I initially thought you were referring to one of 
the HBase daemons, but if this is the MR job then whether or not it closes the 
FileSystem is probably rather academic because it would close the FileSystem as 
the JVM was about to shut down anyway, so it likely doesn't affect whether or 
not you see a problem with memory usage.
* Do you have anything in hadoop-metrics2.properties at all?
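
For reference, a minimal hadoop-metrics2.properties that actually starts sink 
threads might look like the sketch below; the prefix and the file sink are 
illustrative assumptions, and with no sink entries at all no sink threads run:
{code}
# Illustrative sketch only. The "s3a-file-system" prefix is assumed from
# the metrics system name; adjust it to whatever your version registers.
s3a-file-system.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
s3a-file-system.sink.file.filename=s3a-metrics.out
# Snapshot period in seconds (the 10-second default mentioned above).
s3a-file-system.period=10
{code}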


> S3A Metrics in S3AInstrumentation Cause Memory Leaks
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>

[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.4

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451179#comment-16451179
 ] 

Hudson commented on HADOOP-14652:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
Addendum patch to fix the merge issue:   HADOOP-14652 updated a (aengineer: rev 
61516762b678fe83ad77fdbde2d7ecffe2806deb)
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml


> Update metrics-core version to 3.2.4
> 
>
> Key: HADOOP-14652
> URL: https://issues.apache.org/jira/browse/HADOOP-14652
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, 
> HADOOP-14652.003.patch, HADOOP-14652.004.patch, HADOOP-14652.005.patch, 
> HADOOP-14652.006.patch
>
>
> The current artifact is:
> com.codahale.metrics:metrics-core:3.0.1
> That version could either be bumped to 3.0.2 (the latest of that line), or we 
> could move to the latest artifact:
> io.dropwizard.metrics:metrics-core:3.2.4
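
For illustration only, the swap would look roughly like this in the consuming 
pom.xml (a sketch, not the committed patch):
{code}
<!-- before: the old coordinates -->
<dependency>
  <groupId>com.codahale.metrics</groupId>
  <artifactId>metrics-core</artifactId>
  <version>3.0.1</version>
</dependency>

<!-- after: the relocated artifact at 3.2.4 -->
<dependency>
  <groupId>io.dropwizard.metrics</groupId>
  <artifactId>metrics-core</artifactId>
  <version>3.2.4</version>
</dependency>
{code}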



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14898) Create official Docker images for development and testing features

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451082#comment-16451082
 ] 

Hudson commented on HADOOP-14898:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HDFS-12656. Ozone: dozone: Use (proposed) base image from HADOOP-14898. (xyao: 
rev 005b72c6396dde8f0d3c39f7d1d8b627600efbf8)
* (edit) dev-support/compose/ozone/docker-compose.yaml
* (edit) dev-support/compose/ozone/docker-config


> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz, docker_design.pdf
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like to improve the existing documentation with easy-to-use 
> docker based recipes to start hadoop clusters with various configurations.
> The images also could be used to test experimental features. For example, 
> ozone could be tested easily with this compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separated example compose files for federation, ha, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If there are no objections, I will create a 
> separate issue to create simplified docker images for rapid prototyping and 
> investigating new features, and register the branch on the dockerhub to create 
> the images automatically.
> MY BACKGROUND: I have been working with docker based hadoop/spark clusters for 
> quite a while and have run them successfully in different environments 
> (kubernetes, docker-swarm, nomad-based scheduling, etc.). My work is available 
> from here: https://github.com/flokkr and can handle more complex use cases 
> (eg. instrumenting java processes with btrace, or reading/reloading 
> configuration from consul).
> And IMHO it’s better for the official hadoop documentation to suggest using 
> official apache docker images rather than external ones (which could change).
> {code}
> The next list enumerates the key decision points regarding docker image 
> creation
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's easier to create a matrix build, for 
> example).
> The first one has the advantage that we get an additional flag on the 
> dockerhub indicating that the build is automated (and built from the source 
> by the dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image; the 
> other is to use the binary releases.
> I suggest using the second approach, as:
> 1. In that case the hadoop:2.7.3 image could contain exactly the same hadoop 
> distribution as the downloadable one
> 2. We don't need to add development tools to the image, so the image can be 
> smaller (which is important, as the goal for this image is getting started as 
> fast as possible)
> 3. The docker definition will be simpler (and easier to maintain)
> Usually this approach is used in other projects (I checked Apache Zeppelin 
> and Apache Nutch)
> C. branch usage
> Another question is the location of the Docker file. It could be on the 
> official source-code branches (branch-2, trunk, etc.) or we can create 
> separate branches for the dockerhub (eg. docker/2.7 docker/2.8 docker/3.0)
> For the first approach it's easier to find the docker images, but it's less 
> flexible. For example if we had a Dockerfile on the source code branch it 
> should be used for every release (for example the Docker file from the 

[jira] [Commented] (HADOOP-14374) License error in GridmixTestUtils.java

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450697#comment-16450697
 ] 

Hudson commented on HADOOP-14374:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14374. License error in GridmixTestUtils.java. Contributed by (xyao: rev 
70303253843c326f985871d7790aacaab19a401c)
* (edit) 
hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/GridmixTestUtils.java


> License error in GridmixTestUtils.java
> --
>
> Key: HADOOP-14374
> URL: https://issues.apache.org/jira/browse/HADOOP-14374
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: lixinglong
>Assignee: lixinglong
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14374.001.patch, HADOOP-14374.002.patch, 
> HADOOP-14374.003.patch
>
>
> license is not at the top of the class. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451036#comment-16451036
 ] 

Hudson commented on HADOOP-14768:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14768. Honoring sticky bit during Deletion when authorization is (xyao: 
rev bb4e59b2959416a391457f0320a176db27b27067)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockWasbAuthorizerImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (delete) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemAuthorizationWithOwner.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFSAuthorizationCaching.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFSAuthWithBlobSpecificKeys.java


> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>Priority: Major
>  Labels: fs, secure, wasb
> Fix For: 2.9.0
>
> Attachments: HADOOP-14768-branch-2-008.patch, 
> HADOOP-14768-branch-2-009.patch, HADOOP-14768.001.patch, 
> HADOOP-14768.002.patch, HADOOP-14768.003.patch, HADOOP-14768.003.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.004.patch, HADOOP-14768.005.patch, 
> HADOOP-14768.006.patch, HADOOP-14768.007.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any 
> user can delete another user's file when the parent has WRITE permission for 
> all users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as 
> part of this JIRA
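
As a rough sketch of the intended semantics (hypothetical helper and accessors, 
not the attached patch): with the sticky bit set on the parent, only the file's 
owner or the directory's owner may delete.
{code}
// Sketch of the sticky-bit delete check; the FileMetadata accessors are
// hypothetical stand-ins for whatever the store actually exposes.
boolean isDeleteAllowed(FileMetadata parent, FileMetadata file, String user) {
  if (parent.isStickyBitSet()
      && !user.equals(file.getOwner())
      && !user.equals(parent.getOwner())) {
    return false; // sticky bit blocks deleting another user's file
  }
  return true; // otherwise the normal WRITE-permission check applies
}
{code}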



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14095) Document caveats about the default JavaKeyStoreProvider in KMS

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451049#comment-16451049
 ] 

Hudson commented on HADOOP-14095:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14095. Document caveats about the default JavaKeyStoreProvider in (xyao: 
rev a1fe6175f350b3ff3e38a68214320985132cbb89)
* (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm


> Document caveats about the default JavaKeyStoreProvider in KMS
> --
>
> Key: HADOOP-14095
> URL: https://issues.apache.org/jira/browse/HADOOP-14095
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HADOOP-14095.01.patch, HADOOP-14095.02.patch
>
>
> The KMS doc provides an example of using the JavaKeyStoreProvider, but we 
> should document the caveats of using it and setting it up, specifically 
> around keystore passwords.
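
For context, the provider in question is configured along these lines in 
kms-site.xml (the path is illustrative):
{code}
<!-- Illustrative kms-site.xml entry; the keystore path is an example. -->
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/var/lib/kms/kms.keystore</value>
</property>
{code}
The caveats to document revolve around how the keystore password is supplied 
and protected.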



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14909) Fix the word of "erasure encoding" in the top page

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451045#comment-16451045
 ] 

Hudson commented on HADOOP-14909:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14909. Fix the word of erasure encoding in the top page. (xyao: rev 
8a213115aff0b9f274d79160ad9e27c53c7e2b01)
* (edit) hadoop-project/src/site/markdown/index.md.vm


> Fix the word of "erasure encoding" in the top page
> --
>
> Key: HADOOP-14909
> URL: https://issues.apache.org/jira/browse/HADOOP-14909
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HADOOP-14909.1.patch
>
>
> Since "erasure coding" is a more general word, we should use it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14651) Update okhttp version to 2.7.5

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451044#comment-16451044
 ] 

Hudson commented on HADOOP-14651:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14651. Update okhttp version to 2.7.5. Contributed by Ray Chiang (xyao: 
rev 85a4f69b10a00b65c83706f2575e0c6b1e05f664)
* (edit) hadoop-tools/hadoop-azure-datalake/pom.xml
* (edit) hadoop-project/pom.xml


> Update okhttp version to 2.7.5
> --
>
> Key: HADOOP-14651
> URL: https://issues.apache.org/jira/browse/HADOOP-14651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0, 2.9.1, 3.0.3
>
> Attachments: HADOOP-14651-branch-2.0.004.patch, 
> HADOOP-14651-branch-2.0.004.patch, HADOOP-14651-branch-3.0.004.patch, 
> HADOOP-14651-branch-3.0.004.patch, HADOOP-14651.001.patch, 
> HADOOP-14651.002.patch, HADOOP-14651.003.patch, HADOOP-14651.004.patch
>
>
> The current artifact is:
> com.squareup.okhttp:okhttp:2.4.0
> That version could either be bumped to 2.7.5 (the latest of that line), or we 
> could move to the latest artifact:
> com.squareup.okhttp3:okhttp:3.8.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14915) method name is incorrect in ConfServlet

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451051#comment-16451051
 ] 

Hudson commented on HADOOP-14915:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14915. method name is incorrect in ConfServlet. Contributed by (xyao: 
rev 3bf591fd812e16ecf8c2237c974a360f7cfcb262)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfServlet.java


> method name is incorrect in ConfServlet
> ---
>
> Key: HADOOP-14915
> URL: https://issues.apache.org/jira/browse/HADOOP-14915
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-14915.00.patch
>
>
> The method name is parseAccecptHeader.
> Rename it to parseAcceptHeader.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14822) hadoop-project/pom.xml is executable

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451030#comment-16451030
 ] 

Hudson commented on HADOOP-14822:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14822. hadoop-project/pom.xml is executable. Contributed by Ajay (xyao: 
rev ddbe8c56129747efd2fda893e6bca83376c17e80)
* (edit) hadoop-project/pom.xml


> hadoop-project/pom.xml is executable
> 
>
> Key: HADOOP-14822
> URL: https://issues.apache.org/jira/browse/HADOOP-14822
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: HADOOP-14822.01.patch
>
>
> No need for pom.xml to be executable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451039#comment-16451039
 ] 

Hudson commented on HADOOP-14902:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14902. LoadGenerator#genFile write close timing is incorrectly (xyao: 
rev 16c6299088edfffe6d0a1c5d72498551256cfe86)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> LoadGenerator#genFile write close timing is incorrectly calculated
> --
>
> Key: HADOOP-14902
> URL: https://issues.apache.org/jira/browse/HADOOP-14902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>
> Attachments: HADOOP-14902.001.patch, HADOOP-14902.002.patch, 
> HADOOP-14902.003.patch
>
>
> LoadGenerator#genFile's write close timing code looks like the following:
> {code}
> startTime = Time.now();
> executionTime[WRITE_CLOSE] += (Time.now() - startTime);
> {code}
> That code will generate a zero (or near zero) write close timing since it 
> doesn't actually close the file between the two timestamp lookups.
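
Presumably the intent was for the close itself to happen between the two 
timestamps, along these lines (a sketch; 'out' is an assumed name for the 
stream variable):
{code}
startTime = Time.now();
out.close(); // the work being timed now happens between the two readings
executionTime[WRITE_CLOSE] += (Time.now() - startTime);
{code}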



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450983#comment-16450983
 ] 

Hudson commented on HADOOP-12077:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-12077. Provide a multi-URI replication Inode for ViewFs. (xyao: rev 
25a5a4aee8ee51e60097f19b0a0f142f68ab3f55)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsConfig.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/NflyFSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLocalFileSystem.java


> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch, 
> HADOOP-12077.009.patch, HADOOP-12077.010.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the application prefers 
> to fail over to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user/path with multiple physical paths.
> Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication, exceptions are only logged but not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns an FSDataOutputStream that wraps the output streams returned by 
> step 1
> # All writes are forwarded to each output stream.
> # On close of the stream created by step 2, all n streams are closed, and the 
> files are renamed from _nfly_tmp_file to file. All files receive the same 
> mtime corresponding to the client system time as of the beginning of this 
> step. 
> # If at least minReplication destinations have gone through steps 1-4 without 
> failures, the transaction is considered logically committed; otherwise a 
> best-effort attempt at cleaning up the temporary files is made.
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing, as is 
> the case with the local file:///, the local host name 
> InetAddress.getLocalHost() is assumed. This makes 
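
A hypothetical core-site.xml entry for the /nfly/C/user example above; the 
linkNfly key shape and value syntax here are assumptions for illustration, not 
necessarily the committed syntax:
{code}
<!-- Assumed key shape: an Nfly link with two replicas of /user -->
<property>
  <name>fs.viewfs.mounttable.C.linkNfly./user</name>
  <value>hdfs://dc1-nn/C/user,hdfs://dc2-nn/C/user</value>
</property>
{code}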

[jira] [Commented] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450982#comment-16450982
 ] 

Hudson commented on HADOOP-14839:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14839. DistCp log output should contain copied and deleted files (xyao: 
rev 60376f9f80776723e2170b91fd26bf8f98aab4dd)
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpOptions.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
* (edit) 
hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp_Counter.properties
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java


> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14839-branch-2.001.patch, 
> HADOOP-14839-branch-2.002.patch, HADOOP-14839.006.patch, 
> HDFS-10234.001.patch, HDFS-10234.002.patch, HDFS-10234.003.patch, 
> HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It would be more useful if it also contained copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
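
For example, an invocation along these lines (paths are illustrative) would 
then record copied and deleted files in the log directory rather than only 
skipped and failed ones:
{code}
hadoop distcp -update -delete \
    -log hdfs://nn:8020/tmp/distcp-logs \
    hdfs://nn:8020/src s3a://bucket/dest
{code}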



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14820) Wasb mkdirs security checks inconsistent with HDFS

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450975#comment-16450975
 ] 

Hudson commented on HADOOP-14820:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14820 Wasb mkdirs security checks inconsistent with HDFS. (xyao: rev 
a3e1a2dce2b03230ff412128897550e6373ace5d)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/metrics/TestAzureFileSystemInstrumentation.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java


> Wasb mkdirs security checks inconsistent with HDFS
> --
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>Priority: Major
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14820-006.patch, HADOOP-14820-007.patch, 
> HADOOP-14820-branch-2-001.patch.txt, HADOOP-14820.001.patch, 
> HADOOP-14820.002.patch, HADOOP-14820.003.patch, HADOOP-14820.004.patch, 
> HADOOP-14820.005.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 rootroot  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory ‘/home/hdiuser/prefix/1’: Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450981#comment-16450981
 ] 

Hudson commented on HADOOP-14103:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14103. Sort out hadoop-aws contract-test-options.xml. Contributed (xyao: 
rev 850e626f93e59495b09ce8f6cc4a30d1d30f36f6)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md


> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: HADOOP-14103.001.patch, HADOOP-14103.002.patch, 
> HADOOP-14103.003.patch, HADOOP-14103.004.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (and therefore needs). However, we can just put in an SCM-managed one and 
> have it simply XInclude auth-keys.xml.
> I propose: do that, and fix up the testing docs to match.
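
The SCM-managed file could then be as small as the sketch below, pulling the 
developer's private credentials in via XInclude:
{code}
<?xml version="1.0"?>
<!-- Sketch of an SCM-managed contract-test-options.xml -->
<configuration>
  <include xmlns="http://www.w3.org/2001/XInclude"
           href="auth-keys.xml"/>
</configuration>
{code}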



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450978#comment-16450978
 ] 

Hudson commented on HADOOP-14688:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14688. Intern strings in KeyVersion and EncryptedKeyVersion. (xyao: rev 
89ec91cb004ff36b3b7f327167cb3b45b8baadd2)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java


> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: GC root of the String.png, HADOOP-14688.01.patch, 
> heapdump analysis.png, jxray.report
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}}. We should not create duplicate 
> objects for them.
> This is more important to HDFS-10899, where we try to re-encrypt all files' 
> EDEKs in a given EZ. Those EDEKs all have the same key name, and mostly use 
> no more than a couple of key version names.
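
Conceptually the change is tiny; a sketch of the idea (not necessarily the 
exact patch):
{code}
// Intern the highly repetitive key name and key version name so equal
// strings share one backing object across many KeyVersion instances.
protected KeyVersion(String name, String versionName, byte[] material) {
  this.name = (name == null) ? null : name.intern();
  this.versionName = (versionName == null) ? null : versionName.intern();
  this.material = material;
}
{code}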



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14472) Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450841#comment-16450841
 ] 

Hudson commented on HADOOP-14472:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14472. Azure: TestReadAndSeekPageBlobAfterWrite fails (xyao: rev 
756ff412afe48ce811c2e967e044d592ae43ef9c)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestReadAndSeekPageBlobAfterWrite.java


> Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently
> -
>
> Key: HADOOP-14472
> URL: https://issues.apache.org/jira/browse/HADOOP-14472
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14472.000.patch
>
>
> Reported by [HADOOP-14461]
> {code}
> testManySmallWritesWithHFlush(org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite)
>   Time elapsed: 1.051 sec  <<< FAILURE!
> java.lang.AssertionError: hflush duration of 13, less than minimum expected 
> of 20
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.writeAndReadOneFile(TestReadAndSeekPageBlobAfterWrite.java:286)
>   at 
> org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.testManySmallWritesWithHFlush(TestReadAndSeekPageBlobAfterWrite.java:247)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14440) Add metrics for connections dropped

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450830#comment-16450830
 ] 

Hudson commented on HADOOP-14440:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14440. Add metrics for connections dropped. Contributed by Eric (xyao: 
rev 323f8bb6e47d25b98ceb3c1efa5ae184e1ff7858)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md


> Add metrics for connections dropped
> ---
>
> Key: HADOOP-14440
> URL: https://issues.apache.org/jira/browse/HADOOP-14440
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14440.001.patch, HADOOP-14440.002.patch, 
> HADOOP-14440.003.patch
>
>
> This will be useful for figuring out when the NN is getting overloaded with 
> more connections than it can handle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450852#comment-16450852
 ] 

Hudson commented on HADOOP-14500:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14500. Azure: (xyao: rev d2f0ddc8f6f5026dd5e1aa27d60b736d07d67a79)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestFileSystemOperationsExceptionHandlingMultiThreaded.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestFileSystemOperationExceptionHandling.java


> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Rajesh Balamohan
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14500-001.patch
>
>
> The following test fails:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did early analysis and found [HADOOP-14478] maybe the reason. I think we 
> can fix the test itself here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14491) Azure has messed doc structure

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450842#comment-16450842
 ] 

Hudson commented on HADOOP-14491:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14491. Azure has messed doc structure. Contributed by Mingliang (xyao: 
rev 974f33add21f77fff920caee15d38526ffa5be79)
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/index.md


> Azure has messed doc structure
> --
>
> Key: HADOOP-14491
> URL: https://issues.apache.org/jira/browse/HADOOP-14491
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14491.000.patch, new.png, old.png
>
>
> # The _WASB Secure mode and configuration_ and _Authorization Support in 
> WASB_ sections are missing from the navigation
> # _Authorization Support in WASB_ should be header level 3 instead of level 2 
> # The format of some code blocks is not specified
> # Sample code indentation is not unified.
> Let's use the auto-generated navigation instead of manually updating it, just 
> as other documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14431) ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450837#comment-16450837
 ] 

Hudson commented on HADOOP-14431:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14431. ModifyTime of FileStatus returned by SFTPFileSystem's (xyao: rev 
4c06897a3637e60e481b6537e21c6d0d13415d6a)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java


> ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is 
> wrong
> ---
>
> Key: HADOOP-14431
> URL: https://issues.apache.org/jira/browse/HADOOP-14431
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch
>
>
> {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} 
> builds the FileStatus as in the code below:
> {code}
>   private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
>       Path parentPath) throws IOException {
>     SftpATTRS attr = sftpFile.getAttrs();
>     ...
>     long modTime = attr.getMTime() * 1000; // convert to milliseconds
>     ...
>   }
> {code}
> where {{attr.getMTime()}} returns an int, which means the multiplication 
> overflows and the modTime is wrong
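
The usual fix for this overflow pattern is to widen before multiplying; a 
minimal sketch:
{code}
// getMTime() returns an int number of seconds; using 1000L promotes the
// operand to long before the multiply, so the milliseconds value cannot
// overflow.
long modTime = attr.getMTime() * 1000L;
{code}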



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14485) Redundant 'final' modifier in try-with-resources statement

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450840#comment-16450840
 ] 

Hudson commented on HADOOP-14485:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14485. Redundant 'final' modifier in try-with-resources (xyao: rev 
cc8bcf1efd692d4a5d2c119c222be5f95d3d52e2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
* (edit) 
hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestRollingAverages.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java


> Redundant 'final' modifier in try-with-resources statement
> --
>
> Key: HADOOP-14485
> URL: https://issues.apache.org/jira/browse/HADOOP-14485
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14485.001.patch
>
>
> Redundant 'final' modifier in the try-with-resources statement. Any variable 
> declared in the try-with-resources statement is implicitly final.
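
A compact illustration (the demo class is hypothetical; the implicit finality 
of resource variables is guaranteed by the language spec):
{code}
import java.io.FileInputStream;
import java.io.IOException;

class TryWithResourcesDemo {
  static int readFirstByte(String path) throws IOException {
    // Writing "final FileInputStream in = ..." here would be redundant:
    // a resource variable is implicitly final, and reassigning it is a
    // compile-time error either way.
    try (FileInputStream in = new FileInputStream(path)) {
      return in.read();
    }
  }
}
{code}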



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450839#comment-16450839
 ] 

Hudson commented on HADOOP-14035:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14035. Reduce fair call queue backoff's impact on clients. (xyao: rev 
fd77c7f76bcadfb10f789a95700fd50972a2f292)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java


> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14035.branch-2.8.patch, 
> HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14428) s3a: mkdir appears to be broken

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450831#comment-16450831
 ] 

Hudson commented on HADOOP-14428:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14428. s3a: mkdir appears to be broken. Contributed by Mingliang (xyao: 
rev ce634881ced7ff14118a7789cb70ff6428710e00)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMkdirTest.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


> s3a: mkdir appears to be broken
> ---
>
> Key: HADOOP-14428
> URL: https://issues.apache.org/jira/browse/HADOOP-14428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2, HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch
>
>
> Reproduction is:
> hadoop fs -mkdir s3a://my-bucket/dir/
> hadoop fs -ls s3a://my-bucket/dir/
> ls: `s3a://my-bucket/dir/': No such file or directory
> I believe this is a regression from HADOOP-14255.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14478) Optimize NativeAzureFsInputStream for positional reads

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450835#comment-16450835
 ] 

Hudson commented on HADOOP-14478:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14478. Optimize NativeAzureFsInputStream for positional reads. (xyao: 
rev 2777b1d4565efea85ea25fee3327c1ff53ab72f2)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java


> Optimize NativeAzureFsInputStream for positional reads
> --
>
> Key: HADOOP-14478
> URL: https://issues.apache.org/jira/browse/HADOOP-14478
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14478.001.patch, HADOOP-14478.002.patch, 
> HADOOP-14478.003.patch
>
>
> Azure's {{BlobInputStream}} internally buffers 4 MB of data irrespective of 
> the data length requested. This is beneficial for sequential reads. 
> However, for positional reads (seek to a specific location, read x bytes, 
> seek back to the original location) this may not be beneficial and might 
> even download a lot more data that is never used.
> It would be good to override {{readFully(long position, byte[] buffer, int 
> offset, int length)}} for {{NativeAzureFsInputStream}} and make use of 
> {{mark(readLimit)}} as a hint to Azure's BlobInputStream.
> BlobInputStream reference: 
> https://github.com/Azure/azure-storage-java/blob/master/microsoft-azure-storage/src/com/microsoft/azure/storage/blob/BlobInputStream.java#L448
> BlobInputStream can consider this as a hint later to determine the amount of 
> data to be read ahead. Changes to BlobInputStream would not be addressed in 
> this JIRA.
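
In outline, such an override might look like the sketch below, assuming a 
wrapper with seek()/getPos() and an underlying mark-supporting stream named 
'in'; this illustrates the idea, not the committed patch:
{code}
// Sketch only: hint the expected read size, do the positional read,
// then restore the stream position.
@Override
public synchronized void readFully(long position, byte[] buffer, int offset,
    int length) throws IOException {
  long oldPos = getPos();
  try {
    seek(position);
    in.mark(length); // hint: only 'length' bytes are needed
    int nread = 0;
    while (nread < length) {
      int n = in.read(buffer, offset + nread, length - nread);
      if (n <= 0) {
        throw new java.io.EOFException("End of file reached before reading fully.");
      }
      nread += n;
    }
  } finally {
    seek(oldPos);
  }
}
{code}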



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14442) Owner support for ranger-wasb integration

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450798#comment-16450798
 ] 

Hudson commented on HADOOP-14442:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14442. Owner support for ranger-wasb integration. Contributed by (xyao: 
rev fc28cc927cc25c7e63c4191a9a77e84ecb2f2f70)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizerInterface.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteWasbAuthorizerImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockWasbAuthorizerImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorizationWithOwner.java


> Owner support for ranger-wasb integration
> -
>
> Key: HADOOP-14442
> URL: https://issues.apache.org/jira/browse/HADOOP-14442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>Priority: Major
>  Labels: filesystem, secure, wasb
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14442.1.patch, HADOOP-14442.patch
>
>
> For the ranger-wasb integration, we need the owner information from the 
> metadata of the files/folders to be passed along to the ranger authorizer.
> This patch contains the changes related to retrieving the owner from metadata 
> and making it available to the ranger plugin that is integrated with wasb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14436) Remove the redundant colon in ViewFs.md

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450826#comment-16450826
 ] 

Hudson commented on HADOOP-14436:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14436. Remove the redundant colon in ViewFs.md. Contributed by (xyao: 
rev 6618442809094aef2370b859fe80b577297725d8)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md


> Remove the redundant colon in ViewFs.md
> ---
>
> Key: HADOOP-14436
> URL: https://issues.apache.org/jira/browse/HADOOP-14436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1, 3.0.0-alpha2
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14436.patch
>
>
> A minor mistake can lead beginners down the wrong path and drive them away 
> from us.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9849) License information is missing for native CRC32 code

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450815#comment-16450815
 ] 

Hudson commented on HADOOP-9849:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-9849. License information is missing for native CRC32 code (xyao: rev 
83b97f8a598944de0e90aab435a38e676e4393b3)
* (edit) LICENSE.txt


> License information is missing for native CRC32 code
> 
>
> Key: HADOOP-9849
> URL: https://issues.apache.org/jira/browse/HADOOP-9849
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Timothy St. Clair
>Assignee: Andrew Wang
>Priority: Critical
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-9849.001.patch
>
>
> The following files are licensed under the BSD license but the BSD
> license is not part of the distribution:
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
> I believe this file is BSD as well:
> hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14464) hadoop-aws doc header warning #5 line wrapped

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450800#comment-16450800
 ] 

Hudson commented on HADOOP-14464:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14464. hadoop-aws doc header warning #5 line wrapped. Contributed (xyao: 
rev 2cee811941ef24ba4d28dfeaf96a716e6056f616)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md


> hadoop-aws doc header warning #5 line wrapped
> -
>
> Key: HADOOP-14464
> URL: https://issues.apache.org/jira/browse/HADOOP-14464
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14464.001.patch
>
>
> The line was probably automatically wrapped by the editor:
> {code}
> Warning #5: The S3 client provided by Amazon EMR are not from the Apache
> Software foundation, and are only supported by Amazon.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14460) Azure: update doc for live and contract tests

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450820#comment-16450820
 ] 

Hudson commented on HADOOP-14460:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14460. Azure: update doc for live and contract tests. Contributed (xyao: 
rev 5c6f22d62ea9e6fbe4e5411d5934958fcbf15dac)
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/index.md


> Azure: update doc for live and contract tests
> -
>
> Key: HADOOP-14460
> URL: https://issues.apache.org/jira/browse/HADOOP-14460
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/azure
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14460.000.patch, HADOOP-14460.001.patch, 
> HADOOP-14460.002.patch
>
>
> In {{SimpleKeyProvider}}, we have the following code for getting the key:
> {code}
>   protected static final String KEY_ACCOUNT_KEY_PREFIX =
>   "fs.azure.account.key.";
> ...
>   protected String getStorageAccountKeyName(String accountName) {
> return KEY_ACCOUNT_KEY_PREFIX + accountName;
>   }
> {code}
> While in documentation {{index.md}}, we have:
> {code}
>   <property>
>     <name>fs.azure.account.key.youraccount.blob.core.windows.net</name>
>     <value>YOUR ACCESS KEY</value>
>   </property>
> {code}
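
The mismatch is easiest to see by evaluating the lookup by hand; a runnable
restatement of the quoted concatenation, using the account name from the
documented property:

{code}
public final class AzureKeyLookup {
  // Mirrors the concatenation in the quoted SimpleKeyProvider code: the
  // account name is the full blob endpoint host from the wasb URI, so the
  // documented property only matches when it uses that same full form.
  public static void main(String[] args) {
    final String prefix = "fs.azure.account.key.";
    final String accountName = "youraccount.blob.core.windows.net";
    System.out.println(prefix + accountName);
    // prints: fs.azure.account.key.youraccount.blob.core.windows.net
  }
}
{code}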



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14458) Add missing imports to TestAliyunOSSFileSystemContract.java

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450804#comment-16450804
 ] 

Hudson commented on HADOOP-14458:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14458. Add missing imports to (xyao: rev 
d86fbcf0840dc1e29b2d678149bdfb5cc61ba85d)
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java


> Add missing imports to TestAliyunOSSFileSystemContract.java
> ---
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14458.000.patch
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-aliyun: Compilation failure: 
> Compilation failure:
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[71,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[90,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[91,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[92,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[93,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[95,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[96,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[98,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[99,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[115,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[129,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> 
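
The compilation failures above all point at JUnit assertion helpers that
stopped being inherited once the class no longer extends
junit.framework.TestCase (the HADOOP-14180 migration). A minimal sketch of
the missing static imports:

{code}
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import org.junit.Test;

public class StaticImportExample {
  // With the static imports in place, assertTrue/fail are again usable
  // as unqualified names inside a plain JUnit 4 test class.
  @Test
  public void testSomething() {
    assertTrue("arithmetic still works", 1 + 1 == 2);
    if (System.currentTimeMillis() < 0) {
      fail("clock went backwards");  // demonstrates the fail import
    }
  }
}
{code}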

[jira] [Commented] (HADOOP-14466) Remove useless document from TestAliyunOSSFileSystemContract.java

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450816#comment-16450816
 ] 

Hudson commented on HADOOP-14466:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14466. Remove useless document from (xyao: rev 
0618f490ddadbf50bdd4532747df775105d2385e)
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java


> Remove useless document from TestAliyunOSSFileSystemContract.java
> -
>
> Key: HADOOP-14466
> URL: https://issues.apache.org/jira/browse/HADOOP-14466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14466.001.patch
>
>
> The following documentation comment is not valid.
> {code:title=TestAliyunOSSFileSystemContract.java}
>  * This uses BlockJUnit4ClassRunner because FileSystemContractBaseTest from
>  * TestCase which uses the old Junit3 runner that doesn't ignore assumptions
>  * properly making it impossible to skip the tests if we don't have a valid
>  * bucket.
> {code}
> HADOOP-14180 updated FileSystemContractBaseTest to use JUnit 4, so this 
> sentence is no longer valid.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450817#comment-16450817
 ] 

Hudson commented on HADOOP-13921:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-13921. Remove log4j classes from JobConf. (xyao: rev 
9d9e56c39f848719814d1f25db726c0e9608c89f)
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml


> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch
>
>
> Remove the use of log4j classes in JobConf so that the dependency is not
> needed unless folks are making use of our custom log4j appenders or loading
> a logging bridge to use that system.
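
As a hedged sketch of that decoupling idea (not the actual patch), a constant
can be held as a plain String so the class loads without log4j on the
classpath:

{code}
// Sketch, not the actual patch: hold the default log level as a plain
// String so this class neither imports nor loads org.apache.log4j.Level.
public class LogLevelHolder {
  public static final String DEFAULT_LOG_LEVEL = "INFO";

  public static void main(String[] args) {
    // Callers that genuinely use log4j convert lazily, e.g.
    // org.apache.log4j.Level.toLevel(DEFAULT_LOG_LEVEL); kept as a comment
    // here so the sketch runs without log4j on the classpath.
    System.out.println("default level = " + DEFAULT_LOG_LEVEL);
  }
}
{code}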



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450806#comment-16450806
 ] 

Hudson commented on HADOOP-14456:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14456. Modifier 'static' is redundant for inner enums. (xyao: rev 
71c34c715588c6c160449801b85f40f4317f3264)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Constant.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/SetAttr3.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Compression.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipDecompressor.java
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/retry/UnreliableImplementation.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountInterface.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestHelper.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibCompressor.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcMessage.java


> Modifier 'static' is redundant for inner enums
> --
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14456.001.patch, HADOOP-14456.002.patch
>
>
> A Java inner enum is implicitly static final, so the explicit 'static'
> modifier on inner enums is redundant. I suggest deleting the 'static'
> modifier.
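
A small self-contained example of the language rule (JLS 8.9: a nested enum
is implicitly static):

{code}
// An enum declared inside a class is implicitly static, so the two
// declarations below are identical; the explicit modifier is noise.
public class Outer {
  enum Color { RED, GREEN }          // implicitly static
  static enum Size { SMALL, LARGE }  // 'static' is redundant

  public static void main(String[] args) {
    // Both nested enums are usable without an Outer instance.
    System.out.println(Color.RED + " " + Size.SMALL);
  }
}
{code}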



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450753#comment-16450753
 ] 

Hudson commented on HADOOP-14407:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14407. DistCp - Introduce a configurable copy buffer size. (Omkar (xyao: 
rev 1252aa37811892a269f3feb298cf66faee81d9c0)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpOptions.java
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm


> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14407.001.patch, HADOOP-14407.002.patch, 
> HADOOP-14407.002.patch, HADOOP-14407.003.patch, 
> HADOOP-14407.004.branch2.patch, HADOOP-14407.004.patch, 
> HADOOP-14407.004.patch, HADOOP-14407.branch2.002.patch, 
> TotalTime-vs-CopyBufferSize.jpg
>
>
> Currently, the RetriableFileCopyCommand has a fixed copy buffer size of just
> 8 KB. In our performance tests we have seen up to a ~3x performance boost
> with bigger buffer sizes. Hence, this change makes the copy buffer size a
> configurable setting via a new parameter.
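
The effect of the buffer size is easiest to see in a generic copy loop; a
minimal sketch (not DistCp's actual code) where copyBufferSize controls how
much data moves per read/write round trip:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class BufferedCopy {
  // A larger buffer means fewer read()/write() round trips per file,
  // which is where the reported ~3x gain on large copies comes from.
  public static long copy(InputStream in, OutputStream out,
      int copyBufferSize) throws IOException {
    byte[] buf = new byte[copyBufferSize];  // 8 * 1024 was the old fixed size
    long total = 0;
    int n;
    while ((n = in.read(buf)) > 0) {
      out.write(buf, 0, n);
      total += n;
    }
    return total;
  }
}
{code}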



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450783#comment-16450783
 ] 

Hudson commented on HADOOP-14180:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14180. FileSystem contract tests to replace JUnit 3 with 4. (xyao: rev 
527c9dde40b2030afb981e78ab1df683eacd33c2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractEmulator.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractMocked.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractLive.java
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractPageBlobLive.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java


> FileSystem contract tests to replace JUnit 3 with 4
> ---
>
> Key: HADOOP-14180
> URL: https://issues.apache.org/jira/browse/HADOOP-14180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>Priority: Major
>  Labels: test
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14180.000.patch, HADOOP-14180.001.patch, 
> HADOOP-14180.002.patch, HADOOP-14180.003.patch
>
>
> This is from discussion in [HADOOP-14170], as Steve commented:
> {quote}
> ...it's time to move this to JUnit 4, annotate all tests with @test, and make 
> the test cases skip if they don't have the test FS defined. JUnit 3 doesn't 
> support Assume, so when I do test runs without the s3n or s3 fs specced, I 
> get lots of errors I just ignore.
> ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a 
> method to make the test a no-op, and insert an Assume.assumeTrue(false) in 
> there so they skip properly.
> {quote}
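
A minimal JUnit 4 sketch of the pattern Steve describes, assuming a flag that
says whether the test filesystem is configured:

{code}
import static org.junit.Assume.assumeTrue;

import org.junit.Before;
import org.junit.Test;

public class ExampleContractTest {
  private boolean testFsDefined = false;  // would come from the test config

  @Before
  public void setUp() {
    // Under JUnit 4, a failed assumption marks the test as skipped
    // instead of surfacing as an error the way it did under JUnit 3.
    assumeTrue("test filesystem not configured", testFsDefined);
  }

  @Test
  public void testSomethingAgainstTheFs() {
    // only runs when the assumption above holds
  }
}
{code}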



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11869) Suppress ParameterNumber checkstyle violations for overridden methods

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450754#comment-16450754
 ] 

Hudson commented on HADOOP-11869:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-11869. Suppress ParameterNumber checkstyle violations for (xyao: rev 
83fef23141d27591c3a1ea5d02b8056cf44e4f56)
* (edit) hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml


> Suppress ParameterNumber checkstyle violations for overridden methods
> -
>
> Key: HADOOP-11869
> URL: https://issues.apache.org/jira/browse/HADOOP-11869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-11869.ParameterNumber.patch
>
>
> There seem to be a lot of arcane errors being caused by the checkstyle 
> rules/script. Real issues tend to be buried in this noise. Some examples:
> 1. "Line is longer than 80 characters" - this shows up even for cases like 
> import statements, package names
> 2. "Missing a Javadoc comment." - for every private member including cases 
> like "Configuration conf". 
> Having rules like these will result in a large number of pre-commit job 
> failures. We should fine-tune the rules used for checkstyle.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450751#comment-16450751
 ] 

Hudson commented on HADOOP-11572:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-11572. s3a delete() operation fails during a concurrent delete of (xyao: 
rev bbd9e9d7d512602eb55ecc5bdf0c197278f9a890)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFailureHandling.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md


> s3a delete() operation fails during a concurrent delete of child entries
> 
>
> Key: HADOOP-11572
> URL: https://issues.apache.org/jira/browse/HADOOP-11572
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-11572-001.patch, HADOOP-11572-branch-2-002.patch, 
> HADOOP-11572-branch-2-003.patch
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a 
> child entry during a recursive directory delete is propagated as an 
> exception, rather than ignored as a detail which idempotent operations should 
> just ignore.
> The exception should be caught and, if it is a file-not-found problem,
> logged rather than propagated.
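
A minimal sketch of that catch-and-log behaviour (hypothetical Deleter
interface, not the s3a code):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

public final class IdempotentDelete {
  /** Sketch: a child vanishing mid-delete is a success, not a failure. */
  static void deleteChild(Deleter deleter, String key) throws IOException {
    try {
      deleter.delete(key);
    } catch (FileNotFoundException e) {
      // Another client already deleted it; the desired end state holds.
      System.out.println("child already gone, ignoring: " + key);
    }
  }

  interface Deleter {
    void delete(String key) throws IOException;
  }
}
{code}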



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14419) Remove findbugs report from docs profile

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450743#comment-16450743
 ] 

Hudson commented on HADOOP-14419:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14419. Remove findbugs report from docs profile. Contributed by (xyao: 
rev f560d46e3ac92d5afeab38e08be3f6b1b1a6811d)
* (edit) hadoop-project-dist/pom.xml
* (edit) BUILDING.txt


> Remove findbugs report from docs profile
> 
>
> Key: HADOOP-14419
> URL: https://issues.apache.org/jira/browse/HADOOP-14419
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14419.01.patch, HADOOP-14419.02.patch
>
>
> Based on [~aw]'s comments on HADOOP-12557, the findbugs report is not needed
> in the distro.
> Let's remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14415) Use java.lang.AssertionError instead of junit.framework.AssertionFailedError

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450745#comment-16450745
 ] 

Hudson commented on HADOOP-14415:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14415. Use java.lang.AssertionError instead of (xyao: rev 
cdf35ee06bd2806e5fbe677b2c481536e68cbd6f)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShell.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
* (edit) 
hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemContract.java


> Use java.lang.AssertionError instead of junit.framework.AssertionFailedError
> 
>
> Key: HADOOP-14415
> URL: https://issues.apache.org/jira/browse/HADOOP-14415
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14415.001.patch
>
>
> When reviewing HADOOP-14180, I found that some test code throws 
> junit.framework.AssertionFailedError. org.junit.Assert no longer throws 
> AssertionFailedError, so we should use AssertionError instead of 
> AssertionFailedError.
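
A small self-contained example of the preferred style:

{code}
// junit.framework.AssertionFailedError is the JUnit 3 type; org.junit.Assert
// in JUnit 4 throws java.lang.AssertionError, so hand-rolled test failures
// should do the same to be handled uniformly by the runner.
public class FailureStyle {
  public static void main(String[] args) {
    try {
      throw new AssertionError("expected X but got Y");  // preferred
    } catch (AssertionError e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
{code}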



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14426) Upgrade Kerby version from 1.0.0-RC2 to 1.0.0

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450782#comment-16450782
 ] 

Hudson commented on HADOOP-14426:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14426. Upgrade Kerby version from 1.0.0-RC2 to 1.0.0. Contributed (xyao: 
rev df1496a39b8374acefe2c5da7b75e5971029ce35)
* (edit) hadoop-project/pom.xml


> Upgrade Kerby version from 1.0.0-RC2 to 1.0.0
> -
>
> Key: HADOOP-14426
> URL: https://issues.apache.org/jira/browse/HADOOP-14426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jiajia Li
>Assignee: Jiajia Li
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14426-001.patch
>
>
> Apache Kerby 1.0.0 includes a number of bug fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14434) Use MoveFileEx to allow renaming a file when the destination exists

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450748#comment-16450748
 ] 

Hudson commented on HADOOP-14434:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14434. Use MoveFileEx to allow renaming a file when the (xyao: rev 
27845d5269b3e4b73ebd1eb6f5ca17efc62d8097)
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


> Use MoveFileEx to allow renaming a file when the destination exists
> ---
>
> Key: HADOOP-14434
> URL: https://issues.apache.org/jira/browse/HADOOP-14434
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.7.1, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: windows
> Fix For: 2.7.4, 3.0.0-alpha4
>
> Attachments: HADOOP-14434.002.patch, HDFS-11713.001.patch
>
>
> {{NativeIO.c#renameTo0}} currently uses the {{MoveFile}} Windows system
> call, which fails when renaming a file to a destination that already exists.
> This makes the {{TestRollingUpgrade.testRollback}} test fail on Windows, as
> during that execution a DataNode tries to rename a block's meta file to a
> destination that exists.
> The proposal is to switch to the {{MoveFileEx}} Windows call, passing in the
> {{MOVEFILE_REPLACE_EXISTING}} flag to force the renaming.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14432) S3A copyFromLocalFile to be robust, tested

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450756#comment-16450756
 ] 

Hudson commented on HADOOP-14432:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14432. S3A copyFromLocalFile to be robust, tested. Contributed by (xyao: 
rev d1b23b3dcaada06a39a5b55601e8f4036912d719)
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


> S3A copyFromLocalFile to be robust, tested
> --
>
> Key: HADOOP-14432
> URL: https://issues.apache.org/jira/browse/HADOOP-14432
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14432-001.patch
>
>
> {{S3AFileSystem.copyFromLocalFile()}} doesn't:
> * check that the local file exists. Fix: check and raise FNFE (today an
> AmazonClientException is raised)
> * check whether the dest is a directory. Fix: better checks before upload
> * have any tests. Fix: write the tests
> This is related to the committer work, but doesn't depend on it.
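
A hedged sketch of the pre-upload checks being asked for (plain java.io
types, not the s3a implementation):

{code}
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

public final class CopyPreconditions {
  /** Sketch of the checks described above, not the actual s3a code. */
  static void checkBeforeUpload(File src, boolean destIsDirectory,
      String destPath) throws IOException {
    if (!src.exists()) {
      // Raise FNFE up front instead of letting the SDK fail later
      // with an AmazonClientException.
      throw new FileNotFoundException("source not found: " + src);
    }
    if (destIsDirectory) {
      // A file must not silently overwrite a directory; the real fix
      // would resolve to a child entry or fail cleanly before upload.
      throw new IOException("destination is a directory: " + destPath);
    }
  }
}
{code}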



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450785#comment-16450785
 ] 

Hudson commented on HADOOP-14399:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14399. Configuration does not correctly XInclude absolute file (xyao: 
rev 0d55cc6a37a413ebe35c02a18c0bce2f4d490e6f)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14427) Avoid reloading of Configuration in ViewFileSystem creation.

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450747#comment-16450747
 ] 

Hudson commented on HADOOP-14427:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14427. Avoid reloading of Configuration in ViewFileSystem (xyao: rev 
37d7afc29bd7035d14123ceead15dce7d7a32e0d)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> Avoid reloading of Configuration in ViewFileSystem creation.
> 
>
> Key: HADOOP-14427
> URL: https://issues.apache.org/jira/browse/HADOOP-14427
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14427-01.patch
>
>
> Avoid {{new Configuration()}} in the code below, run during ViewFileSystem
> creation:
> {code}
> public InternalDirOfViewFs(final InodeTree.INodeDir dir,
>     final long cTime, final UserGroupInformation ugi, URI uri)
>     throws URISyntaxException {
>   myUri = uri;
>   try {
>     initialize(myUri, new Configuration());
>   } catch (IOException e) {
>     throw new RuntimeException("Cannot occur");
>   }
>   theInternalDir = dir;
>   creationTime = cTime;
>   this.ugi = ugi;
> }
> {code}
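
The cost being avoided is the default-resource reload: each fresh
Configuration instance ends up re-reading core-default.xml and core-site.xml
when first used. A small demo of reusing one instance (requires hadoop-common
on the classpath):

{code}
import org.apache.hadoop.conf.Configuration;

public final class ConfReuseDemo {
  public static void main(String[] args) {
    Configuration shared = new Configuration();
    for (int i = 0; i < 3; i++) {
      use(shared);                 // cheap: defaults are loaded once
      // use(new Configuration()); // what the quoted constructor did per call
    }
  }

  private static void use(Configuration conf) {
    System.out.println(conf.get("fs.defaultFS", "file:///"));
  }
}
{code}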



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450776#comment-16450776
 ] 

Hudson commented on HADOOP-14449:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14449. The ASF Header in ComparableVersion.java and (xyao: rev 
ddfdfbdbd1711e77b0051a67ac6a9f5e6c7bf574)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java


> The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not 
> correct
> 
>
> Key: HADOOP-14449
> URL: https://issues.apache.org/jira/browse/HADOOP-14449
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14449.001.patch
>
>
> The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not 
> correct



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450786#comment-16450786
 ] 

Hudson commented on HADOOP-14430:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14430 the accessTime of FileStatus returned by SFTPFileSystem's (xyao: 
rev 605b29d3de3febea73c6ddf1025f04c69ac3b575)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java


> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> The accessTime of the FileStatus returned by SFTPFileSystem's getFileStatus
> method is always 0 ({{long accessTime = 0}} in the code below):
> {code}
> private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
>     Path parentPath) throws IOException {
>   SftpATTRS attr = sftpFile.getAttrs();
>   ...
>   // convert to milliseconds (this is wrong too, according to HADOOP-14431)
>   long modTime = attr.getMTime() * 1000;
>   long accessTime = 0;
>   ...
> }
> {code}
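
JSch's SftpATTRS does expose the access time, so a fix can mirror the
modification-time line; a hedged sketch, where widening to long before
multiplying also sidesteps the int overflow HADOOP-14431 describes:

{code}
import com.jcraft.jsch.SftpATTRS;

public final class SftpTimes {
  // Sketch of the fix: read the access time from the SFTP attributes
  // instead of hard-coding 0. getATime() is in seconds; multiplying as
  // a long avoids overflowing int arithmetic.
  static long accessTimeMillis(SftpATTRS attr) {
    return attr.getATime() * 1000L;
  }
}
{code}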



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14166) Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not used

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450777#comment-16450777
 ] 

Hudson commented on HADOOP-14166:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14166. Reset the DecayRpcScheduler AvgResponseTime metric to zero (xyao: 
rev 49ea48078bb322c1840d9a50cdd9a65ebad0cafa)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java


> Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not 
> used
> -
>
> Key: HADOOP-14166
> URL: https://issues.apache.org/jira/browse/HADOOP-14166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14166.001.patch
>
>
> {noformat}
>  "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
> "modelerType" : "DecayRpcSchedulerMetrics2.ipc.8020",
> "tag.Context" : "ipc.8020",
> "tag.Hostname" : "host1",
> "DecayedCallVolume" : 3,
> "UniqueCallers" : 1,
> "Caller(root).Volume" : 266,
> "Caller(root).Priority" : 3,
> "Priority.0.AvgResponseTime" : 6.151201023385511E-5,
> "Priority.1.AvgResponseTime" : 0.0,
> "Priority.2.AvgResponseTime" : 0.0,
> "Priority.3.AvgResponseTime" : 1.184686336544601,
> "Priority.0.CompletedCallVolume" : 0,
> "Priority.1.CompletedCallVolume" : 0,
> "Priority.2.CompletedCallVolume" : 0,
> "Priority.3.CompletedCallVolume" : 2,
> "CallVolume" : 266
> {noformat}
> "Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even the queue 
> is not used for long time.
> {code}
>   if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
> if (enableDecay) {
>   final double decayed = decayFactor * lastAvg + averageResponseTime;
>   LOG.info("Decayed "  + i + " time " +   decayed);
>   responseTimeAvgInLastWindow.set(i, decayed);
> } else {
>   responseTimeAvgInLastWindow.set(i, averageResponseTime);
> }
>   }
> {code}
> We should reset it to zero when the above condition is false.
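
A self-contained restatement of the proposed control flow, with the missing
else branch (a sketch, not the patch):

{code}
public final class DecayedAverage {
  private static final double PRECISION = 0.0001;

  // When the queue saw no traffic and the previous value is below
  // PRECISION, write an explicit zero so a stale decayed average cannot
  // linger in the reported metric.
  static double nextWindowValue(double lastAvg, double averageResponseTime,
      boolean enableDecay, double decayFactor) {
    if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
      return enableDecay
          ? decayFactor * lastAvg + averageResponseTime
          : averageResponseTime;
    }
    return 0;  // idle queue: reset instead of keeping the old value
  }
}
{code}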



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14416) Path starting with 'wasb:///' not resolved correctly while authorizing with WASB-Ranger

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450738#comment-16450738
 ] 

Hudson commented on HADOOP-14416:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14416. Path starting with 'wasb:///' not resolved correctly while (xyao: 
rev 5d20b2eeab1dfbf5591e4e9eecde3517d6155bd0)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbRemoteCallHelper.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockWasbAuthorizerImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AbstractWasbTestBase.java


> Path starting with 'wasb:///' not resolved correctly while authorizing with 
> WASB-Ranger
> ---
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, security
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14416.001.patch, HADOOP-14416.001.patch, 
> HADOOP-14416.002.patch, HADOOP-14416.003.patch, Non-SecureRun-Logs.txt, 
> SecureRunLogs.txt
>
>
> Bug found while launching spark-shell.
> Repro steps:
> 1. Create a spark cluster with wasb-acls enabled.
> 2. Change spark history log directory configurations to 
> wasb:///hdp/spark2-events
> 3. Launching the spark shell should fail.
> The above scenario works fine with clusters that don't have wasb-acl
> authorization enabled.
> Note : wasb:/// resolves correctly on fs shell.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14401) maven-project-info-reports-plugin can be removed

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450724#comment-16450724
 ] 

Hudson commented on HADOOP-14401:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14401. maven-project-info-reports-plugin can be removed. (xyao: rev 
ccdcc34490d9cb0e54e018440101420fca2eeb0c)
* (edit) hadoop-tools/hadoop-aliyun/pom.xml
* (edit) hadoop-tools/hadoop-azure-datalake/pom.xml
* (edit) hadoop-tools/hadoop-azure/pom.xml
* (edit) hadoop-tools/hadoop-kafka/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* (edit) hadoop-project/pom.xml
* (edit) hadoop-tools/hadoop-aws/pom.xml
* (edit) hadoop-tools/hadoop-openstack/pom.xml
* (edit) hadoop-common-project/hadoop-auth/pom.xml


> maven-project-info-reports-plugin can be removed
> 
>
> Key: HADOOP-14401
> URL: https://issues.apache.org/jira/browse/HADOOP-14401
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14401.01.patch
>
>
> By default {{maven-project-info-reports-plugin}} is called at the site
> phase, but in Hadoop Main's pom we use the {{excludeDefaults}} tag which
> [excludes maven-project-info-reports-plugin from the
> process|https://maven.apache.org/pom.html#Reporting]. It will run only when
> it is called directly.
> I found two invocations: hadoop-hdfs-httpfs and hadoop-auth.
> These invocations seem unnecessary, as I pointed out in HADOOP-14393.
> This plugin can be removed; it makes the understanding of site generation
> harder. It took me a while to find out the role of the plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14413) Add Javadoc comment for jitter parameter on CachingGetSpaceUsed

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450720#comment-16450720
 ] 

Hudson commented on HADOOP-14413:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14413. Add Javadoc comment for jitter parameter on (xyao: rev 
d76fedffd7434377f13fa167af50d7f7a87215ca)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java


> Add Javadoc comment for jitter parameter on CachingGetSpaceUsed
> ---
>
> Key: HADOOP-14413
> URL: https://issues.apache.org/jira/browse/HADOOP-14413
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14413.000.patch, HADOOP-14413.001.patch
>
>
> When the jitter parameter was added in HADOOP-12975, the Javadoc was not 
> updated accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14361) Azure: NativeAzureFileSystem.getDelegationToken() call fails sometimes when invoked concurrently

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450717#comment-16450717
 ] 

Hudson commented on HADOOP-14361:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14361. Azure: NativeAzureFileSystem.getDelegationToken() call (xyao: rev 
aa6f3238d6b2e557589ae1dd78e58bfc5eb21721)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java


> Azure: NativeAzureFileSystem.getDelegationToken() call fails sometimes when 
> invoked concurrently
> 
>
> Key: HADOOP-14361
> URL: https://issues.apache.org/jira/browse/HADOOP-14361
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Trupti Dhavle
>Assignee: Santhosh G Nayak
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14361.1.patch
>
>
> Sometimes the {{NativeAzureFileSystem.getDelegationToken()}} method fails
> with the exception below when invoked concurrently:
> {code}Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Authentication failed, URL: 
> http://delegationtokenmanger/?op=GETDELEGATIONTOKEN=rm%2Fhostname%40realm,
>  status: 401, message: Authentication required
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:278)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:195)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
>   at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$2.run(NativeAzureFileSystem.java:2993)
>   at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$2.run(NativeAzureFileSystem.java:2990)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>   ... 29 more
> {code}
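
The patch itself isn't quoted here; one common remedy for this class of race
is to serialize token acquisition across threads. A hedged sketch under that
assumption only (all names hypothetical):

{code}
import java.io.IOException;

public final class SerializedTokenFetcher {
  private final Object tokenLock = new Object();
  private String cachedToken;  // illustrative placeholder type

  // Sketch: serialize acquisition so concurrent callers cannot race on
  // shared authentication state (one symptom of such races is a spurious
  // 401 from the token endpoint).
  public String getDelegationToken(TokenSource source) throws IOException {
    synchronized (tokenLock) {
      if (cachedToken == null) {
        cachedToken = source.fetch();
      }
      return cachedToken;
    }
  }

  public interface TokenSource {
    String fetch() throws IOException;
  }
}
{code}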



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14377) Increase Common test timeouts from 1 second to 10 seconds

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450702#comment-16450702
 ] 

Hudson commented on HADOOP-14377:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14377. Increase Common test timeouts from 1 second to 10 seconds. (xyao: 
rev d6d1e2438bae8e39f2e43187c7186b0dacd41345)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/service/TestCompositeService.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestGlobPattern.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFSFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSortedMapWritable.java


> Increase Common test timeouts from 1 second to 10 seconds
> -
>
> Key: HADOOP-14377
> URL: https://issues.apache.org/jira/browse/HADOOP-14377
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14377.001.patch
>
>
> 1-second test timeouts are susceptible to failure on overloaded or otherwise
> slow machines.
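
In JUnit 4 terms the change is a one-attribute edit per test:

{code}
import org.junit.Test;

public class TimeoutExample {
  // A 1-second budget flakes on loaded CI hosts; 10 seconds keeps the
  // hang-detection property without punishing slow machines.
  @Test(timeout = 10000)  // was: timeout = 1000
  public void testFinishesQuickly() throws InterruptedException {
    Thread.sleep(50);  // representative short unit of work
  }
}
{code}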



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14405) Fix performance regression due to incorrect use of DataChecksum

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450706#comment-16450706
 ] 

Hudson commented on HADOOP-14405:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14405. Fix performance regression due to incorrect use of (xyao: rev 
f47199ee4a6cda8b835a034c6765b63680effdd2)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java


> Fix performance regression due to incorrect use of DataChecksum
> ---
>
> Key: HADOOP-14405
> URL: https://issues.apache.org/jira/browse/HADOOP-14405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native, performance
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11765.patch
>
>
> Recently I upgraded my Hadoop version from 2.6 to 3.0 and found that write
> performance decreased by 13%. After some days of comparative analysis, the
> regression seems to have been introduced by HADOOP-10865.
> Since James Thomas has done the work to let the native checksum run against
> byte[] arrays instead of just against byte buffers, we may prefer the native
> method because it runs faster than the others.
> [~szetszwo] and [~iwasakims], could you take a look at this to see if it has
> a bad effect on your benchmark tests? [~tlipcon], could you help check
> whether I have made mistakes in this patch?
> Thanks!
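
A hedged sketch of the selection logic being argued for (illustrative
interfaces, not Hadoop's DataChecksum API): prefer the native implementation
whenever it is available for the input shape:

{code}
public final class ChecksumPathChooser {
  interface ChecksumImpl {
    void verify(byte[] data, int off, int len);
  }

  // Sketch: once the native code can consume byte[] directly (the James
  // Thomas work referenced above), availability alone can pick the path.
  static ChecksumImpl choose(ChecksumImpl nativeImpl, ChecksumImpl javaImpl,
      boolean nativeAvailable) {
    return nativeAvailable ? nativeImpl : javaImpl;
  }
}
{code}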



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14373) License error In org.apache.hadoop.metrics2.util.Servers

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450707#comment-16450707
 ] 

Hudson commented on HADOOP-14373:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14373. License error in org.apache.hadoop.metrics2.util.Servers. (xyao: 
rev afef64b6acfb2749f9f9265fe0989cb853bd78b2)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java


> License error In org.apache.hadoop.metrics2.util.Servers
> 
>
> Key: HADOOP-14373
> URL: https://issues.apache.org/jira/browse/HADOOP-14373
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14373.001.patch
>
>
> License error in org.apache.hadoop.metrics2.util.Servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14400) Fix warnings from spotbugs in hadoop-tools

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450708#comment-16450708
 ] 

Hudson commented on HADOOP-14400:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14400. Fix warnings from spotbugs in hadoop-tools. Contributed by (xyao: 
rev e5928007fae7230bd317300c5cb5df36906563f7)
* (edit) 
hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/emulators/resourceusage/TotalHeapUsageEmulatorPlugin.java
* (edit) 
hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestGridmixMemoryEmulation.java
* (edit) 
hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/InputStriper.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/web/TestSLSWebApp.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/web/SLSWebApp.java
* (edit) 
hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/FairSchedulerMetrics.java


> Fix warnings from spotbugs in hadoop-tools
> --
>
> Key: HADOOP-14400
> URL: https://issues.apache.org/jira/browse/HADOOP-14400
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: findbugs
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14400.001.patch, HADOOP-14400.002.patch
>
>
> Fix 4 warnings in the hadoop-tools project that surfaced after the move to 
> spotbugs:
> # Return value of new 
> org.apache.hadoop.tools.rumen.datatypes.DefaultDataType(String) ignored, but 
> method has no side effect At MapReduceJobPropertiesParser.java
> # org.apache.hadoop.mapred.gridmix.InputStriper$1.compare(Map$Entry, 
> Map$Entry) incorrectly handles double value
> # Useless object stored in variable keysToUpdateAsFolder of method 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(Path, FsPermission, 
> boolean) At NativeAzureFileSystem.java
> # org.apache.hadoop.yarn.sls.SLSRunner.simulateInfoMap is a mutable 
> collection At SLSRunner.java (a typical fix for this pattern is sketched 
> below)
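
For the fourth warning, the usual remedy is to stop handing out the raw 
mutable map. A minimal sketch, assuming callers only need read access (the 
class and field below are illustrative, not the actual SLSRunner code):
{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative holder class; the real fix in SLSRunner may differ.
public class SimulateInfoHolder {
  private static final Map<String, Object> SIMULATE_INFO = new HashMap<>();

  // Expose a read-only view so the "mutable collection" warning goes away
  // and callers cannot mutate shared state.
  public static Map<String, Object> getSimulateInfoMap() {
    return Collections.unmodifiableMap(SIMULATE_INFO);
  }
}
{code}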



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14376) Memory leak when reading a compressed file using the native library

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450728#comment-16450728
 ] 

Hudson commented on HADOOP-14376:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14376. Memory leak when reading a compressed file using the (xyao: rev 
192f1e63180d4ddfc7fa204090a3341190f1b0df)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/BZip2Codec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionOutputStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DecompressorStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CodecPool.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressorStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionInputStream.java


> Memory leak when reading a compressed file using the native library
> ---
>
> Key: HADOOP-14376
> URL: https://issues.apache.org/jira/browse/HADOOP-14376
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, io
>Affects Versions: 2.7.0
>Reporter: Eli Acherkan
>Assignee: Eli Acherkan
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: Bzip2MemoryTester.java, HADOOP-14376.001.patch, 
> HADOOP-14376.002.patch, HADOOP-14376.003.patch, HADOOP-14376.004.patch, 
> log4j.properties
>
>
> Opening and closing a large number of bzip2-compressed input streams causes 
> the process to be killed on OutOfMemory when using the native bzip2 library.
> Our initial analysis suggests that this can be caused by 
> {{DecompressorStream}} overriding the {{close()}} method, and therefore 
> skipping the line from its parent: 
> {{CodecPool.returnDecompressor(trackedDecompressor)}}. When the decompressor 
> object is a {{Bzip2Decompressor}}, its native {{end()}} method is never 
> called, and the allocated memory isn't freed.
> If this analysis is correct, the simplest way to fix this bug would be to 
> replace {{in.close()}} with {{super.close()}} in {{DecompressorStream}}.
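
A minimal sketch of that fix, using simplified stand-ins for the real classes 
(illustrative only, not the committed patch):
{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.Decompressor;

// Simplified stand-in for the real CompressionInputStream.
abstract class SketchCompressionInputStream extends InputStream {
  protected final InputStream in;
  private Decompressor trackedDecompressor; // set when taken from CodecPool

  protected SketchCompressionInputStream(InputStream in, Decompressor d) {
    this.in = in;
    this.trackedDecompressor = d;
  }

  @Override
  public void close() throws IOException {
    try {
      in.close();
    } finally {
      if (trackedDecompressor != null) {
        // Returning the decompressor is what eventually triggers
        // Bzip2Decompressor.end() and frees the native allocation.
        CodecPool.returnDecompressor(trackedDecompressor);
        trackedDecompressor = null;
      }
    }
  }
}

// Simplified stand-in for DecompressorStream.
class SketchDecompressorStream extends SketchCompressionInputStream {
  SketchDecompressorStream(InputStream in, Decompressor d) {
    super(in, d);
  }

  @Override
  public int read() throws IOException {
    return in.read(); // decompression elided in this sketch
  }

  @Override
  public void close() throws IOException {
    // The bug: calling in.close() here bypasses the parent's finally block.
    // The fix: delegate to super.close() so the decompressor is returned.
    super.close();
  }
}
{code}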



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14108) CLI MiniCluster: add an option to specify NameNode HTTP port

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450577#comment-16450577
 ] 

Hudson commented on HADOOP-14108:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14108. CLI MiniCluster: add an option to specify NameNode HTTP 
(aengineer: rev 28529e52045236282cb53d09751aef9b5cc542e5)
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/MiniHadoopClusterManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm


> CLI MiniCluster: add an option to specify NameNode HTTP port
> 
>
> Key: HADOOP-14108
> URL: https://issues.apache.org/jira/browse/HADOOP-14108
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14108.1.patch
>
>
> For the CLI MiniCluster, the NameNode HTTP port is randomly determined. If 
> you want to see the NN web UI or do fsck, you need to dig the port number out 
> of the minicluster's log. It would be useful if users could specify the port 
> number.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14375) Remove tomcat support from hadoop-functions.sh

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450726#comment-16450726
 ] 

Hudson commented on HADOOP-14375:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14375. Remove tomcat support from hadoop-functions.sh. (xyao: rev 
d7cbd2e985ca934c6eb99753f969d771c9e91729)
* (delete) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_finalize_catalina_opts.bats
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> Remove tomcat support from hadoop-functions.sh
> --
>
> Key: HADOOP-14375
> URL: https://issues.apache.org/jira/browse/HADOOP-14375
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha4
>Reporter: Allen Wittenauer
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14375.001.patch
>
>
> Now that tomcat is no longer needed by Hadoop, let's rip out the awful tomcat 
> shell functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14384) Reduce the visibility of FileSystem#newFSDataOutputStreamBuilder before the API becomes stable

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450699#comment-16450699
 ] 

Hudson commented on HADOOP-14384:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14384. Reduce the visibility of (xyao: rev 
1675f5efa7d3f5b9860707e5f94e6434ebf34d81)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStreamBuilder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> Reduce the visibility of FileSystem#newFSDataOutputStreamBuilder before the 
> API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project, to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14386) Rewind trunk from Guava 21.0 back to Guava 11.0.2

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450698#comment-16450698
 ] 

Hudson commented on HADOOP-14386:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14386. Rewind trunk from Guava 21.0 back to Guava 11.0.2. (xyao: rev 
06c940a317989a7076b10d8f026561a1f83ad132)
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerUtilities.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/DirectExecutorService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclTransformation.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerUtilities.java


> Rewind trunk from Guava 21.0 back to Guava 11.0.2
> -
>
> Key: HADOOP-14386
> URL: https://issues.apache.org/jira/browse/HADOOP-14386
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14386.001.patch, HADOOP-14386.002.patch, 
> HADOOP-14386.003.patch, HADOOP-14386.004.patch
>
>
> As an alternative to reverting or shading HADOOP-10101 (the upgrade of Guava 
> from 11.0.2 to 21.0), HADOOP-14380 makes the Guava version configurable. 
> However, it still doesn't compile with Guava 11.0.2, since HADOOP-10101 chose 
> to use the moved Guava classes rather than replacing them with alternatives.
> This JIRA aims to make Hadoop compatible with Guava 11.0.2 as well as 21.0 by 
> replacing usage of these moved Guava classes.
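
As one example of the kind of replacement involved (illustrative; the actual 
call sites vary): Guava 11.0.2 offers {{Objects.firstNonNull}}, which 21.0 only 
offers as {{MoreObjects.firstNonNull}}, so writing the helper out in plain Java 
compiles against both:
{code:java}
// Plain-Java replacement for Objects.firstNonNull (Guava 11.0.2) /
// MoreObjects.firstNonNull (Guava 21.0); no Guava import needed at all.
final class GuavaCompat {
  static <T> T firstNonNull(T first, T second) {
    if (first != null) {
      return first;
    }
    if (second != null) {
      return second;
    }
    throw new NullPointerException("both arguments were null");
  }
}
{code}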



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14410) Correct spelling of 'beginning' and variants

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450719#comment-16450719
 ] 

Hudson commented on HADOOP-14410:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14410. Correct spelling of  'beginning' and variants. Contributed (xyao: 
rev 9247aa23efd50433937dbcfff5cf65b8d64a2459)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsNetgroupMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/QueuePriorityContainerCandidateSelector.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.java


> Correct spelling of  'beginning' and variants
> -
>
> Key: HADOOP-14410
> URL: https://issues.apache.org/jira/browse/HADOOP-14410
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dongtao Zhang
>Assignee: Dongtao Zhang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14410-v001.patch
>
>
> Wrong spelling "begining" should be changed to "beginning".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450742#comment-16450742
 ] 

Hudson commented on HADOOP-14412:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14412. HostsFileReader#getHostDetails is very expensive on large (xyao: 
rev a1ad4ea273ecc10cc5dad6465ee9bdff233e7666)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHostsFileReader.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java


> HostsFileReader#getHostDetails is very expensive on large clusters
> --
>
> Key: HADOOP-14412
> URL: https://issues.apache.org/jira/browse/HADOOP-14412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14412-branch-2.001.patch, 
> HADOOP-14412-branch-2.002.patch, HADOOP-14412-branch-2.002.patch, 
> HADOOP-14412-branch-2.8.002.patch, HADOOP-14412.001.patch, 
> HADOOP-14412.002.patch
>
>
> After upgrading one of our large clusters to 2.8 we noticed many IPC server 
> threads of the resourcemanager spending time in NodesListManager#isValidNode 
> which in turn was calling HostsFileReader#getHostDetails.  The latter is 
> creating complete copies of the include and exclude sets for every node 
> heartbeat, and these sets are not small due to the size of the cluster.  
> These copies are causing multiple resizes of the underlying HashSets being 
> filled and creating lots of garbage.
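
One way to get rid of the per-heartbeat copies, sketched under the assumption 
that reads vastly outnumber refreshes (an illustration, not necessarily the 
committed change): publish an immutable snapshot and hand out a reference.
{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Immutable snapshot of the include/exclude lists. Readers share it.
final class HostDetailsSketch {
  private final Set<String> includes;
  private final Set<String> excludes;

  HostDetailsSketch(Set<String> includes, Set<String> excludes) {
    // Copy exactly once, at refresh time, then freeze.
    this.includes = Collections.unmodifiableSet(new HashSet<>(includes));
    this.excludes = Collections.unmodifiableSet(new HashSet<>(excludes));
  }

  Set<String> getIncludedHosts() { return includes; }
  Set<String> getExcludedHosts() { return excludes; }
}

class HostsFileReaderSketch {
  private volatile HostDetailsSketch current =
      new HostDetailsSketch(new HashSet<>(), new HashSet<>());

  // Hot path, called on every heartbeat: a volatile read, no copying.
  HostDetailsSketch getHostDetails() {
    return current;
  }

  // Cold path, called only when the admin refreshes the hosts files.
  void refresh(Set<String> newIncludes, Set<String> newExcludes) {
    current = new HostDetailsSketch(newIncludes, newExcludes);
  }
}
{code}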



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14056) Update maven-javadoc-plugin to 2.10.4

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450575#comment-16450575
 ] 

Hudson commented on HADOOP-14056:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14056. Update maven-javadoc-plugin to 2.10.4. (aengineer: rev 
31f306c7d1b648b607bf51f9b2bd9dd5ee9b99d1)
* (edit) hadoop-project/pom.xml
* (edit) pom.xml


> Update maven-javadoc-plugin to 2.10.4
> -
>
> Key: HADOOP-14056
> URL: https://issues.apache.org/jira/browse/HADOOP-14056
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14056.01.patch
>
>
> I'm seeing the following warning in OpenJDK 9.
> {noformat}
> [INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc 
> ---
> [WARNING] Unable to find the javadoc version: Unrecognized version of 
> Javadoc: 'java version "9-ea"
> Java(TM) SE Runtime Environment (build 9-ea+154)
> Java HotSpot(TM) 64-Bit Server VM (build 9-ea+154, mixed mode)
> ' near index 37
> (?s).*?([0-9]+\.[0-9]+)(\.([0-9]+))?.*
>  ^
> [WARNING] Using the Java the version instead of, i.e. 0.0
> {noformat}
> Need to update this to 2.10.4. (MJAVADOC-441)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13930) Azure: Add Authorization support to WASB

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450570#comment-16450570
 ] 

Hudson commented on HADOOP-13930:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-13930. Azure: Add Authorization support to WASB. Contributed by 
(aengineer: rev e291aba474086a8f23b5969e3c86bb3786d6a6e0)
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteWasbAuthorizerImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizationOperations.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizationException.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SecureStorageInterfaceImpl.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/index.md
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/package.html
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/Constants.java
* (add) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/WasbDelegationTokenIdentifier.java
* (add) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizerInterface.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteSASKeyGeneratorImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/WasbTokenRenewer.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockWasbAuthorizerImpl.java
Revert "HADOOP-13930. Azure: Add Authorization support to WASB. (aengineer: rev 
bb0bc7d909b17cf50e28ae153a5ff2b78ec13b44)
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/WasbDelegationTokenIdentifier.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizationOperations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteSASKeyGeneratorImpl.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizerInterface.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
* (delete) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockWasbAuthorizerImpl.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizationException.java
* (delete) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAuthorization.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SecureStorageInterfaceImpl.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/index.md
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/Constants.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/package.html
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteWasbAuthorizerImpl.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/WasbTokenRenewer.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
HADOOP-13930. Azure: Add Authorization support to WASB. Contributed by 
(aengineer: rev 223c26853527f1f42f0626ad6f2f233f7984bb5b)
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizationOperations.java
* (add) 

[jira] [Commented] (HADOOP-14026) start-build-env.sh: invalid docker image name

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450574#comment-16450574
 ] 

Hudson commented on HADOOP-14026:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14026. start-build-env.sh: invalid docker image name (Contributed 
(aengineer: rev c854e859eb9d2348ab2f3d516c81deca479c2949)
* (edit) start-build-env.sh


> start-build-env.sh: invalid docker image name
> -
>
> Key: HADOOP-14026
> URL: https://issues.apache.org/jira/browse/HADOOP-14026
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Gergő Pásztor
>Assignee: Gergő Pásztor
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14026_v1.patch, HADOOP-14026_v2.patch
>
>
> start-build-env.sh uses the current user name to generate a docker image 
> name. But the current user name can contain non-English characters and 
> uppercase letters (after all, this is usually the owner's name or nickname). 
> Neither is supported in docker image names, so the script will fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14087) S3A typo in pom.xml test exclusions

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450584#comment-16450584
 ] 

Hudson commented on HADOOP-14087:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14087. S3A typo in pom.xml test exclusions. Contributed by Aaron 
(aengineer: rev 6df1365c328db198bbb95cd71d466bbe13d06be8)
* (edit) hadoop-tools/hadoop-aws/pom.xml


> S3A typo in pom.xml test exclusions
> ---
>
> Key: HADOOP-14087
> URL: https://issues.apache.org/jira/browse/HADOOP-14087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14087.001.patch, HADOOP-14087.002.patch
>
>
> Noticed a copy/paste typo in hadoop-tools/hadoop-aws/pom.xml:
> {code:xml}
>   <excludes>
>     <exclude>**/ITestJets3tNativeS3FileSystemContract.java</exclude>
>     <exclude>**/ITest*Root*.java</exclude>
>     <exclude>**/ITestS3AFileContextStatistics.java</exclude>
>     <exclude>**/ITestS3AHuge*.java</exclude>
>   </excludes>
> {code}  
> That is in the excludes section, so the last line should be 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14048) REDO operation of WASB#AtomicRename should create placeholder blob for destination folder

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450582#comment-16450582
 ] 

Hudson commented on HADOOP-14048:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14048. REDO operation of WASB#AtomicRename should create (aengineer: rev 
e65d8fb3491b328f61ec3b24b837d0249beec4da)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java


> REDO operation of WASB#AtomicRename should create placeholder blob for 
> destination folder
> -
>
> Key: HADOOP-14048
> URL: https://issues.apache.org/jira/browse/HADOOP-14048
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Reporter: NITIN VERMA
>Assignee: NITIN VERMA
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14048.patch
>
>
> While doing manual testing, I realized that the crash recovery of the 
> AtomicRename operation of a folder in AzureNativeFileSystem doesn't create a 
> placeholder property blob for the destination folder. Due to this bug, the 
> destination folder cannot be renamed again.
> Below is how I tested this:
> 1. Create a test directory as "/test/A".
> 2. Create 15 block blobs in the "/test/A" folder.
> 3. Run the "hadoop fs -mv /test/A /test/B" command and crash it as soon as 
> the /test/A-RenamePending.json file is created.
> 4. Now run the "hadoop fs -lsr /test" command, which should complete the 
> pending rename operation (redo) as part of crash recovery. 
> 5. The REDO method copies the pending files from the source folder to the 
> destination folder (by consulting the A-RenamePending.json file), but it 
> doesn't create a 0-byte property blob for the /test/B folder, which is a bug, 
> as that folder will not be usable for many operations. 
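
A sketch of the missing step, with hypothetical names since the real 
AzureNativeFileSystem internals are not shown here:
{code:java}
import java.io.IOException;

// Hypothetical store interface standing in for AzureNativeFileSystemStore.
interface FolderStoreSketch {
  boolean folderPlaceholderExists(String folderKey) throws IOException;
  void createFolderPlaceholder(String folderKey) throws IOException; // 0-byte blob
}

class RenameRedoSketch {
  // Called at the end of redo(), after all files listed in the
  // -RenamePending.json file have been copied to the destination.
  static void ensureDestinationFolder(FolderStoreSketch store, String dstKey)
      throws IOException {
    if (!store.folderPlaceholderExists(dstKey)) {
      // Without this 0-byte property blob, the destination (e.g. /test/B)
      // does not behave like a folder and a later rename of it fails.
      store.createFolderPlaceholder(dstKey);
    }
  }
}
{code}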



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14150) Implement getHomeDirectory() method in NativeAzureFileSystem

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450590#comment-16450590
 ] 

Hudson commented on HADOOP-14150:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
HADOOP-14150. Implement getHomeDirectory() method in (aengineer: rev 
afc2c438c1b052ea34057260cbacec6e49d45f6a)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java


> Implement getHomeDirectory() method in NativeAzureFileSystem
> 
>
> Key: HADOOP-14150
> URL: https://issues.apache.org/jira/browse/HADOOP-14150
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Namit Maheshwari
>Assignee: Santhosh G Nayak
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14150.1.patch
>
>
> {{org.apache.hadoop.fs.azure.NativeAzureFileSystem}} does not override the 
> {{FileSystem#getHomeDirectory()}} method.
> So, whenever {{NativeAzureFileSystem#getHomeDirectory()}} gets called, 
> {{getHomeDirectory()}} from {{FileSystem}} is invoked, which has code like 
> the following:
> {code}
> public Path getHomeDirectory() {
>   return this.makeQualified(
>       new Path(USER_HOME_PREFIX + "/" + System.getProperty("user.name")));
> }
> {code}
> In secure environment, it returns home directory of 
> {{System.getProperty("user.name")}} instead of Kerberos principal/UGI.
> So, the proposal is to override the {{getHomeDirectory()}} method in 
> {{NativeAzureFileSystem}} and have it return the home directory for the 
> Kerberos principal/ugi.
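
A minimal sketch of the proposed override, assuming {{UserGroupInformation}} 
is the right source for the authenticated user (the helper below is 
illustrative, not the committed patch):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

final class HomeDirSketch {
  // "/user" matches FileSystem's USER_HOME_PREFIX.
  static Path homeDirectoryFor(FileSystem fs) throws IOException {
    // Short name of the authenticated UGI/Kerberos principal, not the JVM's
    // user.name system property.
    String user = UserGroupInformation.getCurrentUser().getShortUserName();
    return fs.makeQualified(new Path("/user/" + user));
  }
}
{code}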



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15372) Race conditions and possible leaks in the Shell class

2018-04-24 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450470#comment-16450470
 ] 

Miklos Szegedi commented on HADOOP-15372:
-

Thank you for the patch [~ebadger].
{code:java}
shell.getProcess().destroy();{code}
I see that you use SIGTERM instead of SIGKILL. This may still leak child 
processes.
{code:java}
destroyShellProcesses(getAllShells());
if(!exec.awaitTermination(10, TimeUnit.SECONDS)) {
  destroyShellProcesses(getAllShells());
}{code}
This may still leak shells: if the creator thread is itself waiting on 
something else, it can create the subshell after the second {{getAllShells()}} 
has already been called.

I have a general concern with the original patch: any other user of the Shell 
class may need to change their code to use the same pattern. There are at 
least 5 other users of Shell in the codebase.

Have you considered using process groups? Process groups may also solve the 
case where the node manager signals the container localizer process with 
SIGKILL. Currently, if this happens, the shells will leak.
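
For illustration only (assuming a POSIX system with setsid(1) available and 
Java 9+ for {{Process.pid()}}; this is not part of any patch here), a 
process-group based cleanup could look like:
{code:java}
import java.util.concurrent.TimeUnit;

public class ProcessGroupSketch {
  public static void main(String[] args) throws Exception {
    // setsid makes the child the leader of a new session and process group,
    // so its pid doubles as the process group id.
    Process p = new ProcessBuilder("setsid", "bash", "-c", "sleep 60 & sleep 60")
        .start();
    long pgid = p.pid();

    // Cleanup: signal the whole group (note the leading "-") so grandchild
    // processes cannot escape the way a plain destroy() lets them.
    new ProcessBuilder("kill", "-9", "--", "-" + pgid).start().waitFor();
    p.waitFor(10, TimeUnit.SECONDS);
  }
}
{code}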

> Race conditions and possible leaks in the Shell class
> -
>
> Key: HADOOP-15372
> URL: https://issues.apache.org/jira/browse/HADOOP-15372
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Miklos Szegedi
>Assignee: Eric Badger
>Priority: Minor
> Attachments: HADOOP-15372.001.patch
>
>
> YARN-5641 introduced some cleanup code in the Shell class. It has a race 
> condition: {{Shell.runCommand()}} can be called while/after 
> {{Shell.getAllShells()}} has returned all the shells to be cleaned up. The 
> new thread can then escape the cleanup, so the process it holds can be 
> leaked, causing leaked localized files etc.
> I see another issue as well. {{Shell.runCommand()}} has a finally block with 
> a {{process.destroy();}} to clean up. However, the try/catch block does not 
> cover all instructions after the process is started, so for example we can 
> exit the thread and leak the process if 
> {{timeOutTimer.schedule(timeoutTimerTask, timeOutInterval);}} throws an 
> exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450396#comment-16450396
 ] 

Steve Loughran commented on HADOOP-15408:
-

We've always had a "don't mix hadoop-* jars" policy, though never one on "what 
if you have >1 version on the same CP". 
Ignoring strict "because we said so" policy rules, it'd be good, at least on 
those minor x.y.z releases, for things not to break.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15410) hoop-auth org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider org.apache.log4j package compile error

2018-04-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450355#comment-16450355
 ] 

Steve Loughran commented on HADOOP-15410:
-

We don't want it at compile scope, as then everyone downstream gets it, but 
they don't need it & trying to force it in creates its own problems.

What if you mark it as provided?

> hoop-auth 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider 
> org.apache.log4j package compile error
> --
>
> Key: HADOOP-15410
> URL: https://issues.apache.org/jira/browse/HADOOP-15410
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: lqjack
>Priority: Major
>
> When running 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider, 
> the IDE will automatically compile the java class, but unluckily the 
> org.apache.log4j compile fails. 
> We should change the pom.xml from 
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>runtime</scope>
> </dependency>
> to 
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>compile</scope>
> </dependency>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450342#comment-16450342
 ] 

Xiao Chen commented on HADOOP-15408:


Sorry, I have not debugged this, so I could be missing something. Going only 
from the description:
{quote}
This is because the container loaded KMSDelegationToken class from an older jar 
and KMSLegacyDelegationTokenIdentifier from new jar and it fails when 
KMSLegacyDelegationTokenIdentifier wants to read TOKEN_LEGACY_KIND from 
KMSDelegationToken which doesn't exist before.
{quote}
Would something like  [^split.patch] work?

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15408:
---
Attachment: split.patch

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
> Attachments: split.patch
>
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450294#comment-16450294
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

{quote}This fix also broke Ranger, for the same reason.
 {quote}
Thanks Arpit for chiming in. Since there are multiple components affected by 
this change, it makes sense to fix it in hadoop itself.

bq. I didn't check how the class loader would work because it would see 
'kms-dt' and be able to map to both the legacy identifier from the new jar, and 
the only identifier from the old jar.
Thanks [~xiaochen] for the comment.
I don't completely grasp the idea. I would really appreciate it if you could 
share sample pseudo code explaining the idea.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450285#comment-16450285
 ] 

Arpit Agarwal commented on HADOOP-15408:


This fix also broke Ranger, for the same reason.

> HADOOP-14445 broke Spark.
> -
>
> Key: HADOOP-15408
> URL: https://issues.apache.org/jira/browse/HADOOP-15408
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.4
>Reporter: Rushabh S Shah
>Priority: Blocker
>
> Spark bundles hadoop related jars in their package.
>  Spark expects backwards compatibility between minor versions.
>  Their job failed after we deployed HADOOP-14445 in our test cluster.
> {noformat}
> 2018-04-20 21:09:53,245 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2018-04-20 21:09:53,273 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenIdentifier: Provider 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$
> KMSLegacyDelegationTokenIdentifier could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at 
> org.apache.hadoop.security.token.Token.getClassForIdentifier(Token.java:117)
> at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:138)
> at org.apache.hadoop.security.token.Token.identifierToString(Token.java:393)
> at org.apache.hadoop.security.token.Token.toString(Token.java:413)
> at java.lang.String.valueOf(String.java:2994)
> at 
> org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1634)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1583)
> Caused by: java.lang.NoSuchFieldError: TOKEN_LEGACY_KIND
> at 
> org.apache.hadoop.crypto.key.kms.KMSDelegationToken$KMSLegacyDelegationTokenIdentifier.(KMSDelegationToken.java:64)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 10 more
> 2018-04-20 21:09:53,278 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1
> {noformat}
> Their classpath looks like 
> {{\{...:hadoop-common-pre-HADOOP-14445.jar:.:hadoop-common-with-HADOOP-14445.jar:\}}}
> This is because the container loaded {{KMSDelegationToken}} class from an 
> older jar and {{KMSLegacyDelegationTokenIdentifier}} from new jar and it 
> fails when {{KMSLegacyDelegationTokenIdentifier}} wants to read 
> {{TOKEN_LEGACY_KIND}} from {{KMSDelegationToken}} which doesn't exist before.
>  Cc [~xiaochen]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450213#comment-16450213
 ] 

Xiao Chen commented on HADOOP-15408:


ouch, didn't test it but feels like a scenario that we have to support.

Maybe we can split the new class {{KMSDelegationToken}} into 2 separate 
classes, so there will be no dependency on each other. I didn't check how the 
class loader would work because it would see 'kms-dt' and be able to map to 
both the legacy identifier from the new jar, and the only identifier from the 
old jar. But I think if we do it this way, either jar would work. Thoughts?
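
A rough sketch of that split (the class shapes and token-kind strings below 
are hypothetical, just to show the decoupling):
{code:java}
import org.apache.hadoop.io.Text;

// Two top-level classes with no static references to each other, so a
// classpath that mixes old and new hadoop-common jars can load either one
// without hitting NoSuchFieldError.
final class KMSDelegationTokenSketch {
  static final Text TOKEN_KIND = new Text("kms-dt"); // value illustrative
  private KMSDelegationTokenSketch() {}
}

final class KMSLegacyDelegationTokenIdentifierSketch {
  // Keeps its own private copy of the legacy kind instead of reading
  // TOKEN_LEGACY_KIND from the other class.
  private static final Text TOKEN_LEGACY_KIND = new Text("kms-dt");

  public Text getKind() {
    return TOKEN_LEGACY_KIND;
  }
}
{code}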







[jira] [Commented] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implementation

2018-04-24 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450192#comment-16450192
 ] 

Gabor Bota commented on HADOOP-15400:
-

I would hold off on starting work on this until it's clear whether we want to 
rename or not.

> Improve S3Guard documentation on Authoritative Mode implementation
> --
>
> Key: HADOOP-15400
> URL: https://issues.apache.org/jira/browse/HADOOP-15400
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>
> Part of the design of S3Guard is support for skipping the call to S3 
> listObjects and serving directory listings out of the MetadataStore under 
> certain circumstances. This feature is called "authoritative" mode. I've 
> talked to many people about this feature and it seems to be universally 
> confusing.
> I suggest we improve / add a section to the s3guard.md site docs elaborating 
> on what Authoritative Mode is.
> It is *not* treating the MetadataStore (e.g. dynamodb) as the source of truth 
> in general.
> It *is* the ability to short-circuit S3 list objects and serve listings from 
> the MetadataStore in some circumstances:
> For S3A to skip S3's list objects on some *path* and serve it directly from 
> the MetadataStore, the following things must all be true:
>  # The MetadataStore implementation persists the bit 
> {{DirListingMetadata.isAuthoritative}} set when calling 
> {{MetadataStore#put(DirListingMetadata)}}.
>  # The S3A client is configured to allow the MetadataStore to be the 
> authoritative source of a directory listing 
> (fs.s3a.metadatastore.authoritative=true).
>  # The MetadataStore has a full listing for *path* stored in it. This only 
> happens if the FS client (s3a) has explicitly stored a full directory listing 
> with {{DirListingMetadata.isAuthoritative=true}} before that listing request 
> happens.
> Note that #1 currently only happens in LocalMetadataStore. Adding support to 
> DynamoDBMetadataStore is covered in HADOOP-14154.
> Also, the multiple uses of the word "authoritative" are confusing. Two 
> meanings are used:
>  1. In the FS client configuration fs.s3a.metadatastore.authoritative
>  - Behavior of S3A code (not the MetadataStore)
>  - "S3A is allowed to skip S3.list() when it has a full listing from the 
> MetadataStore"
>  2. In the MetadataStore
>  - When storing a dir listing, it can set a bit isAuthoritative:
>  - 1 : "full contents of directory"
>  - 0 : "may not be the full listing"
> Note that a MetadataStore *MAY* persist this bit (it is not a *MUST*).
> We should probably rename {{DirListingMetadata.isAuthoritative}} to 
> {{.fullListing}}, or at least put a comment where it is used to clarify its 
> meaning.
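A minimal sketch of the client-side half of #2 above, assuming the standard 
Hadoop {{Configuration}} and {{FileSystem}} APIs; the bucket name and class 
name are placeholders. Note this only sets meaning 1; whether a given listing 
is actually short-circuited still depends on meaning 2, the per-directory bit:
{noformat}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AuthoritativeListingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Meaning 1: allow S3A to skip S3.list() when the MetadataStore
    // already holds a full listing for the path.
    conf.setBoolean("fs.s3a.metadatastore.authoritative", true);

    // "example-bucket" is a placeholder; any S3A URI works here.
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
{noformat}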






[jira] [Comment Edited] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450184#comment-16450184
 ] 

Rushabh S Shah edited comment on HADOOP-15408 at 4/24/18 4:40 PM:
--

Thanks [~jojochuang] for taking a look.
bq. Is it a valid use case?
I think yes.
If we can come up with a simple fix, then IMO we should fix it.
In our cluster there are many services (like Oozie and Hive) that bundle 
older versions of the hadoop jars and expect backwards compatibility between 
minor Hadoop versions.
Would like to hear more opinions.


was (Author: shahrs87):
bq. Is it a valid use case?
I think yes.
If we can come up with a simple fix, then IMO we should fix it.
In our cluster there are many services (like Oozie and Hive) that bundle 
older versions of the hadoop jars and expect backwards compatibility between 
minor Hadoop versions.
Would like to hear more opinions.







[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450184#comment-16450184
 ] 

Rushabh S Shah commented on HADOOP-15408:
-

bq. Is it a valid use case?
I think yes.
If we can come up with a simple fix, then IMO we should fix it.
In our cluster there are many services (like Oozie and Hive) that bundle 
older versions of the hadoop jars and expect backwards compatibility between 
minor Hadoop versions.
Would like to hear more opinions.







[jira] [Commented] (HADOOP-15408) HADOOP-14445 broke Spark.

2018-04-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450159#comment-16450159
 ] 

Wei-Chiu Chuang commented on HADOOP-15408:
--

Hi Rushabh, thanks for filing the jira.

It looks like the Java process has both versions of the hadoop-common jar on 
its classpath, which is causing the confusion. Is that a valid use case?
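For anyone reproducing this, a small sketch of the discovery path involved 
(the class name is a placeholder): per the stack trace above, 
{{Token.getClassForIdentifier}} walks the {{ServiceLoader}} providers for 
{{TokenIdentifier}}, and a provider whose static initializer throws, as 
{{KMSLegacyDelegationTokenIdentifier}} does here, surfaces as the 
{{ServiceConfigurationError}}. Running this on the same mixed classpath 
should fail the same way:
{noformat}
import java.util.ServiceLoader;

import org.apache.hadoop.security.token.TokenIdentifier;

public class ListTokenIdentifierProviders {
  public static void main(String[] args) {
    // Iterating forces each provider class to load and instantiate; with
    // the mixed jars described above, reaching the KMS legacy identifier
    // throws ServiceConfigurationError caused by NoSuchFieldError.
    for (TokenIdentifier id : ServiceLoader.load(TokenIdentifier.class)) {
      System.out.println(id.getKind() + " -> " + id.getClass().getName());
    }
  }
}
{noformat}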







[jira] [Commented] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implementation

2018-04-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450140#comment-16450140
 ] 

Steve Loughran commented on HADOOP-15400:
-

I can never spell "authoritative", so renaming it would help me there.







[jira] [Commented] (HADOOP-15410) hadoop-auth org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider org.apache.log4j package compile error

2018-04-24 Thread lqjack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450138#comment-16450138
 ] 

lqjack commented on HADOOP-15410:
-

https://github.com/apache/hadoop/pull/366

> hadoop-auth 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider 
> org.apache.log4j package compile error
> --
>
> Key: HADOOP-15410
> URL: https://issues.apache.org/jira/browse/HADOOP-15410
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: lqjack
>Priority: Major
>
> When running 
> {{org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider}}, 
> the IDE automatically compiles the Java classes, but the references to the 
> org.apache.log4j package fail to compile because log4j is not on the compile 
> classpath. 
> We should change the pom.xml from
> {noformat}
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>runtime</scope>
> </dependency>
> {noformat}
> to 
> {noformat}
> <dependency>
>   <groupId>log4j</groupId>
>   <artifactId>log4j</artifactId>
>   <scope>compile</scope>
> </dependency>
> {noformat}
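For context, a minimal sketch of why the scope matters; the class name is 
hypothetical and stands in for test code such as TestZKSignerSecretProvider 
that imports org.apache.log4j types directly. A runtime-scoped dependency is 
present when tests execute but absent from the compile classpath, so any 
source-level import of it fails to build:
{noformat}
// Hypothetical example class. With log4j at "runtime" scope these imports
// are not resolvable at compile time; "compile" (or "test") scope puts
// them back on the compile classpath.
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jCompileScopeExample {
  public static void main(String[] args) {
    Logger log = Logger.getLogger(Log4jCompileScopeExample.class);
    log.setLevel(Level.INFO);
    log.info("compiles only when log4j is on the compile classpath");
  }
}
{noformat}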






[jira] [Commented] (HADOOP-15410) hadoop-auth org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider org.apache.log4j package compile error

2018-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450137#comment-16450137
 ] 

ASF GitHub Bot commented on HADOOP-15410:
-

GitHub user lqjack opened a pull request:

https://github.com/apache/hadoop/pull/366

HADOOP-15410

change scope

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lqjack/hadoop HADOOP-15410

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #366


commit 2742e795b11c20a33b23033b308c927739191bce
Author: lqjaclee 
Date:   2018-04-24T16:02:52Z

HADOOP-15410

change scope









