[jira] [Created] (HADOOP-17710) Use SSLFactory to initialize KeyManagerFactory for LdapGroupsMapping

2021-05-18 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17710:
---

 Summary: Use SSLFactory to initialize KeyManagerFactory for 
LdapGroupsMapping 
 Key: HADOOP-17710
 URL: https://issues.apache.org/jira/browse/HADOOP-17710
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao


This was found while working on HADOOP-17699. The special handling for the IBM 
JDK is missing from the LdapGroupsMapping class, which will fail in an IBM JDK 
environment. 
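As an illustration of the vendor dependence (a minimal plain-Java sketch, not the actual Hadoop code): hardcoding "SunX509" fails on the IBM JDK, whose JSSE provider registers "IbmX509" instead, while asking the JRE for its default algorithm works on either vendor. Reusing SSLFactory's initialization gives LdapGroupsMapping the same behavior.

{code:java}
import javax.net.ssl.KeyManagerFactory;

// Minimal sketch, not the actual Hadoop code. Hardcoding "SunX509" fails on
// the IBM JDK, whose JSSE provider registers "IbmX509" instead; asking the
// JRE for its default algorithm works on either vendor.
public class KmfAlgorithmSketch {
    public static void main(String[] args) throws Exception {
        String algorithm = KeyManagerFactory.getDefaultAlgorithm();
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(algorithm);
        System.out.println("Default KeyManagerFactory algorithm: " + kmf.getAlgorithm());
    }
}
{code}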



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-18 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-17699.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Remove hardcoded SunX509 usage from SSLFactory
> --
>
> Key: HADOOP-17699
> URL: https://issues.apache.org/jira/browse/HADOOP-17699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509", which 
> is used to get a KeyManager/TrustManager. This KeyManager type might not be 
> available when using other JSSE providers, e.g., in a FIPS deployment.
>  
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
> ssl related configuration. Fall
>  back to system-generic settings.
>  java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not 
> available
>  at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>  at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>  at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>  at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>  at 
> org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the default algorithm defined by the Java 
> security properties ssl.KeyManagerFactory.algorithm and 
> ssl.TrustManagerFactory.algorithm.
>  






[jira] [Updated] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-16 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17699:

Summary: Remove hardcoded SunX509 usage from SSLFactory  (was: Remove 
hardcoded "SunX509" usage from SSLFactory)

> Remove hardcoded SunX509 usage from SSLFactory
> --
>
> Key: HADOOP-17699
> URL: https://issues.apache.org/jira/browse/HADOOP-17699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509", which 
> is used to get a KeyManager/TrustManager. This KeyManager type might not be 
> available when using other JSSE providers, e.g., in a FIPS deployment.
>  
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
> ssl related configuration. Fall
>  back to system-generic settings.
>  java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not 
> available
>  at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>  at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>  at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>  at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>  at 
> org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the default algorithm defined by the Java 
> security properties ssl.KeyManagerFactory.algorithm and 
> ssl.TrustManagerFactory.algorithm.
>  






[jira] [Created] (HADOOP-17699) Remove hardcoded "SunX509" usage from SSLFactory

2021-05-14 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17699:
---

 Summary: Remove hardcoded "SunX509" usage from SSLFactory
 Key: HADOOP-17699
 URL: https://issues.apache.org/jira/browse/HADOOP-17699
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
ReloadingX509TrustManager, there is a hardcoded reference to "SunX509", which is 
used to get a KeyManager/TrustManager. This KeyManager type might not be 
available when using other JSSE providers, e.g., in a FIPS deployment.

 
{code:java}
WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
ssl related configuration. Fall
 back to system-generic settings.
 java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not available
 at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
 at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
 at 
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
 at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
 at 
org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
 at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
 at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
{code}
This ticket is opened to use the default algorithm defined by the Java security 
properties ssl.KeyManagerFactory.algorithm and ssl.TrustManagerFactory.algorithm.
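A minimal sketch of that direction (illustrative, not the actual FileBasedKeyStoresFactory patch): getDefaultAlgorithm() resolves those security properties, falling back to the provider default, so no algorithm name needs to be hardcoded.

{code:java}
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;

// Illustrative sketch, not the actual patch: getDefaultAlgorithm() reads the
// ssl.KeyManagerFactory.algorithm / ssl.TrustManagerFactory.algorithm
// security properties, so the active provider's algorithm is picked up
// instead of a hardcoded "SunX509" (which throws NoSuchAlgorithmException on
// providers that do not register it).
public class DefaultSslFactories {
    public static void main(String[] args) throws Exception {
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        System.out.println(kmf.getAlgorithm() + " / " + tmf.getAlgorithm());
    }
}
{code}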

 






[jira] [Updated] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2021-05-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17284:

Fix Version/s: 3.3.1

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-17284.001.patch, HADOOP-17284.002.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens like passwords for the cluster. It currently only supports 
> the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) 
> keystore type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security.
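For illustration only (the provider class, the bcfks:// scheme, and the paths below are assumptions, not taken from the patch): with a FIPS provider registered in java.security, a BCFKS-backed credential store might be used along these lines.

{code:bash}
# Illustrative only: scheme name and paths are assumptions.
# 1. Register the Bouncy Castle FIPS provider in the JDK's java.security:
#    security.provider.N=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
# 2. Create and list credentials in a BCFKS-typed store:
hadoop credential create db.password -provider bcfks://file/etc/hadoop/creds.bcfks
hadoop credential list -provider bcfks://file/etc/hadoop/creds.bcfks
{code}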






[jira] [Updated] (HADOOP-17608) TestKMS is flaky

2021-03-31 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17608:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~aajisaka] for the contribution. The PR has been merged. 

> TestKMS is flaky
> 
>
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: flaky-test, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt]
> The following https tests are flaky:
>  * testStartStopHttpsPseudo
>  * testStartStopHttpsKerberos
>  * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] 
> testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  
> Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat}






[jira] [Updated] (HADOOP-17598) Fix java doc issue introduced by HADOOP-17578

2021-03-22 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17598:

Priority: Minor  (was: Major)

> Fix java doc issue introduced by HADOOP-17578
> -
>
> Key: HADOOP-17598
> URL: https://issues.apache.org/jira/browse/HADOOP-17598
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> Remove the unused throw declaration. 






[jira] [Created] (HADOOP-17598) Fix java doc issue introduced by HADOOP-17578

2021-03-22 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17598:
---

 Summary: Fix java doc issue introduced by HADOOP-17578
 Key: HADOOP-17598
 URL: https://issues.apache.org/jira/browse/HADOOP-17598
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Remove the unused throw declaration. 






[jira] [Updated] (HADOOP-17578) Improve UGI debug log to help troubleshooting TokenCache related issues

2021-03-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17578:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Improve UGI debug log to help troubleshooting TokenCache related issues
> ---
>
> Key: HADOOP-17578
> URL: https://issues.apache.org/jira/browse/HADOOP-17578
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We have seen some issues around TokenCache getDelegationToken failures even 
> though the UGI already has a valid token. The tricky part is that the token 
> map is keyed by the canonical service name, which can differ from the actual 
> service field in the token, e.g., the KMS token in the HA case. The current 
> UGI log dumps all the tokens but not the keys of the token map. This ticket 
> is opened to include the complete token map information in the debug log.






[jira] [Updated] (HADOOP-17482) Remove Commons Logger from FileSystem Class

2021-03-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17482:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove Commons Logger from FileSystem Class
> ---
>
> Key: HADOOP-17482
> URL: https://issues.apache.org/jira/browse/HADOOP-17482
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Remove the reference to the Commons logger in FileSystem; it already has 
> SLF4J, so it's a bit weird to mix, match, and interweave loggers in this 
> way. Also, my hope is to eventually migrate everything to SLF4J to simplify 
> things for downstream consumers of the common library.






[jira] [Updated] (HADOOP-17581) Fix reference to LOG is ambiguous after HADOOP-17482

2021-03-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17581:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix reference to LOG is ambiguous after HADOOP-17482
> 
>
> Key: HADOOP-17581
> URL: https://issues.apache.org/jira/browse/HADOOP-17581
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HADOOP-17482 changed FileSystem.class to have two slf4j LOG instances. 
> This breaks the Hadoop CI/Jenkins build, as some tests that use this LOG 
> directly now hit an ambiguous reference between the two slf4j Logger 
> instances for the same FileSystem.class. This ticket is opened to fix 
> those tests to unblock CI. 
>  
> {code:java}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java:[1424,25]
>  error: reference to LOG is ambiguous
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java:[102,25]
>  error: reference to LOG is ambiguous
> [INFO] 2 errors 
> {code}






[jira] [Updated] (HADOOP-17581) Fix reference to LOG is ambiguous after HADOOP-17482

2021-03-11 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17581:

Status: Patch Available  (was: Open)

> Fix reference to LOG is ambiguous after HADOOP-17482
> 
>
> Key: HADOOP-17581
> URL: https://issues.apache.org/jira/browse/HADOOP-17581
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HADOOP-17482 changed FileSystem.class to have two slf4j LOG instances. 
> This breaks the Hadoop CI/Jenkins build, as some tests that use this LOG 
> directly now hit an ambiguous reference between the two slf4j Logger 
> instances for the same FileSystem.class. This ticket is opened to fix 
> those tests to unblock CI. 
>  
> {code:java}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java:[1424,25]
>  error: reference to LOG is ambiguous
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java:[102,25]
>  error: reference to LOG is ambiguous
> [INFO] 2 errors 
> {code}






[jira] [Commented] (HADOOP-17482) Remove Commons Logger from FileSystem Class

2021-03-11 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17300011#comment-17300011
 ] 

Xiaoyu Yao commented on HADOOP-17482:
-

I opened HADOOP-17581 to unblock CI by changing the affected tests to stop 
using FileSystem.LOG directly. 

> Remove Commons Logger from FileSystem Class
> ---
>
> Key: HADOOP-17482
> URL: https://issues.apache.org/jira/browse/HADOOP-17482
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Remove the reference to the Commons logger in FileSystem; it already has 
> SLF4J, so it's a bit weird to mix, match, and interweave loggers in this 
> way. Also, my hope is to eventually migrate everything to SLF4J to simplify 
> things for downstream consumers of the common library.






[jira] [Created] (HADOOP-17581) Fix reference to LOG is ambiguous after HADOOP-17482

2021-03-11 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17581:
---

 Summary: Fix reference to LOG is ambiguous after HADOOP-17482
 Key: HADOOP-17581
 URL: https://issues.apache.org/jira/browse/HADOOP-17581
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HADOOP-17482 changed FileSystem.class to have two slf4j LOG instances. This 
breaks the Hadoop CI/Jenkins build, as some tests that use this LOG directly 
now hit an ambiguous reference between the two slf4j Logger instances for the 
same FileSystem.class. This ticket is opened to fix those tests to unblock 
CI. 

 
{code:java}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java:[1424,25]
 error: reference to LOG is ambiguous
[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java:[102,25]
 error: reference to LOG is ambiguous
[INFO] 2 errors 
{code}
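A minimal, self-contained illustration of the compiler error (plain Java, not the Hadoop classes): when two fields named LOG are inherited into the same scope, the simple name can no longer be used and the reference must be qualified, which is essentially the fix the affected tests need.

{code:java}
// Plain-Java illustration of the error, not the Hadoop code: a class that
// inherits two fields named LOG cannot use the simple name any more.
public class AmbiguousLogDemo {
    interface HasCommonsLog { String LOG = "commons-logging"; }
    interface HasSlf4jLog { String LOG = "slf4j"; }

    // Inherits LOG from both interfaces, like a test seeing both
    // FileSystem loggers.
    static class Client implements HasCommonsLog, HasSlf4jLog {
        String pick() {
            // return LOG;          // error: reference to LOG is ambiguous
            return HasSlf4jLog.LOG; // fix: qualify the reference explicitly
        }
    }

    public static void main(String[] args) {
        System.out.println(new Client().pick()); // prints "slf4j"
    }
}
{code}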






[jira] [Commented] (HADOOP-17482) Remove Commons Logger from FileSystem Class

2021-03-11 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1730#comment-1730
 ] 

Xiaoyu Yao commented on HADOOP-17482:
-

Agree with what [~ste...@apache.org] mentioned above. The Jira is open, but 
the PR has been merged.

The merged change breaks the Hadoop CI/Jenkins build, as the ambiguous 
reference between the two slf4j Logger instances for the same FileSystem.class 
now fails the build. 

Previously, the log instances seemed hacky, with one from the Commons logger 
and one from slf4j. 
{code:java}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java:[1424,25]
 error: reference to LOG is ambiguous
[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2762/src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java:[102,25]
 error: reference to LOG is ambiguous
[INFO] 2 errors 

{code}

> Remove Commons Logger from FileSystem Class
> ---
>
> Key: HADOOP-17482
> URL: https://issues.apache.org/jira/browse/HADOOP-17482
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Remove the reference to the Commons logger in FileSystem; it already has 
> SLF4J, so it's a bit weird to mix, match, and interweave loggers in this 
> way. Also, my hope is to eventually migrate everything to SLF4J to simplify 
> things for downstream consumers of the common library.






[jira] [Updated] (HADOOP-17578) Improve UGI debug log to help troubleshooting TokenCache related issues

2021-03-11 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17578:

Status: Patch Available  (was: Open)

> Improve UGI debug log to help troubleshooting TokenCache related issues
> ---
>
> Key: HADOOP-17578
> URL: https://issues.apache.org/jira/browse/HADOOP-17578
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have seen some issues around TokenCache getDelegationToken failures even 
> though the UGI already has a valid token. The tricky part is that the token 
> map is keyed by the canonical service name, which can differ from the actual 
> service field in the token, e.g., the KMS token in the HA case. The current 
> UGI log dumps all the tokens but not the keys of the token map. This ticket 
> is opened to include the complete token map information in the debug log.






[jira] [Updated] (HADOOP-17578) Improve UGI debug log to help troubleshooting TokenCache related issues

2021-03-11 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17578:

Summary: Improve UGI debug log to help troubleshooting TokenCache related 
issues  (was: Improve UGI debug log to help troubleshoot TokenCache related 
issues)

> Improve UGI debug log to help troubleshooting TokenCache related issues
> ---
>
> Key: HADOOP-17578
> URL: https://issues.apache.org/jira/browse/HADOOP-17578
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> We have seen some issues around TokenCache getDelegationToken failures even 
> though the UGI already has a valid token. The tricky part is that the token 
> map is keyed by the canonical service name, which can differ from the actual 
> service field in the token, e.g., the KMS token in the HA case. The current 
> UGI log dumps all the tokens but not the keys of the token map. This ticket 
> is opened to include the complete token map information in the debug log.






[jira] [Created] (HADOOP-17578) Improve UGI debug log to help troubleshoot TokenCache related issues

2021-03-11 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17578:
---

 Summary: Improve UGI debug log to help troubleshoot TokenCache 
related issues
 Key: HADOOP-17578
 URL: https://issues.apache.org/jira/browse/HADOOP-17578
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


We have seen some issues around TokenCache getDelegationToken failures even 
though the UGI already has a valid token. The tricky part is that the token 
map is keyed by the canonical service name, which can differ from the actual 
service field in the token, e.g., the KMS token in the HA case. The current 
UGI log dumps all the tokens but not the keys of the token map. This ticket is 
opened to include the complete token map information in the debug log. 
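A plain-Java sketch of the pitfall (hypothetical names, not the actual UGI/Credentials API): dumping only the token values hides the lookup key, so a debug log that prints key -> value makes the mismatch visible.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java sketch with hypothetical names, not the actual UGI/Credentials
// API: the map key (canonical service name) can differ from the token's own
// service field, so logging only the values hides the lookup key.
public class TokenMapLogSketch {
    static final class Token {
        final String service;
        Token(String service) { this.service = service; }
    }

    public static void main(String[] args) {
        Map<String, Token> tokenMap = new LinkedHashMap<>();
        // Key differs from the value's service field, as with HA KMS tokens.
        tokenMap.put("kms://https@kms.example.com:9600/kms", new Token("ha-kms"));

        // Old-style dump: values only -- the key used for lookup is invisible.
        tokenMap.values().forEach(t -> System.out.println("token service=" + t.service));

        // Improved dump: key -> value, so key/service mismatches stand out.
        tokenMap.forEach((k, v) ->
            System.out.println("key=" + k + " -> token service=" + v.service));
    }
}
{code}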








[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2021-01-26 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272430#comment-17272430
 ] 

Xiaoyu Yao commented on HADOOP-17079:
-

Thanks [~daryn] for the comments. Here are my thoughts on adding a new method 
for GroupCacheLoader#getGroupsSet. 

Many GroupMappingServiceProvider implementations already use Set 
internally (e.g., LdapGroupsMapping#lookupGroup) or use an additional step to 
dedup the list (e.g., ShellBasedUnixGroupsMapping). It is expensive to convert 
between Set and List back and forth with the existing list-based 
getGroups() method in the GroupMappingServiceProvider interface. 

Can you elaborate on the proposal to change GroupCacheLoader#load? Can we 
avoid the two conversions: Set -> List (GroupMappingServiceProvider impl) 
and List -> Set (GroupCacheLoader)? 
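A small sketch of the cost difference being discussed (plain Java, illustrative names): a Set built once at load time makes membership checks hash lookups instead of linear scans, and avoids the per-call round-trip conversions.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: Set#contains is a hash lookup while List#contains is
// a linear scan, so a Set-returning getGroupsSet() pays one conversion at
// load time instead of converting (or scanning) per lookup.
public class GroupLookupSketch {
    public static void main(String[] args) {
        List<String> groupList = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            groupList.add("group-" + i);
        }
        // Built once, e.g. by the group mapping provider itself.
        Set<String> groupSet = new HashSet<>(groupList);

        System.out.println(groupSet.contains("group-9999")); // true, O(1)
        System.out.println(groupSet.contains("no-such"));    // false
    }
}
{code}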

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, 
> HADOOP-17079.007.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the 
> List->Set->List conversion. However, the returned list is not optimized for 
> contains() lookups, especially when the user's group membership list is huge 
> (thousands of groups). This ticket is opened to add a UGI#getGroupsSet and 
> use Set#contains() instead of List#contains() to speed up large group 
> lookups while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2021-01-19 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268259#comment-17268259
 ] 

Xiaoyu Yao commented on HADOOP-17079:
-

Good catch, [~ahussein]. This only affects deployments with 
hadoop.security.group.mapping=ShellBasedUnixGroupsNetgroupMapping or 
JniBasedUnixGroupsNetgroupMapping. 

I will help review HADOOP-17467 to get them fixed. 

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, 
> HADOOP-17079.007.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the 
> List->Set->List conversion. However, the returned list is not optimized for 
> contains() lookups, especially when the user's group membership list is huge 
> (thousands of groups). This ticket is opened to add a UGI#getGroupsSet and 
> use Set#contains() instead of List#contains() to speed up large group 
> lookups while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Commented] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache

2020-10-13 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213171#comment-17213171
 ] 

Xiaoyu Yao commented on HADOOP-17304:
-

Thanks [~hexiaoqiao] for the review. Moved the constant to KMSACLs as 
suggested in the 002 patch. 

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> --
>
> Key: HADOOP-17304
> URL: https://issues.apache.org/jira/browse/HADOOP-17304
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17304.001.patch, HADOOP-17304.002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
> invalidate-cache operation itself requires ROLLOVER permission on the key. 
> This ticket is opened to fix the issue caught by TestKMS.testACLs.






[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache

2020-10-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17304:

Attachment: HADOOP-17304.002.patch

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> --
>
> Key: HADOOP-17304
> URL: https://issues.apache.org/jira/browse/HADOOP-17304
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17304.001.patch, HADOOP-17304.002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
> invalidate-cache operation itself requires ROLLOVER permission on the key. 
> This ticket is opened to fix the issue caught by TestKMS.testACLs.






[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache

2020-10-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17304:

Status: Patch Available  (was: Open)

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> --
>
> Key: HADOOP-17304
> URL: https://issues.apache.org/jira/browse/HADOOP-17304
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17304.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
> invalidate-cache operation itself requires ROLLOVER permission on the key. 
> This ticket is opened to fix the issue caught by TestKMS.testACLs.






[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache

2020-10-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17304:

Attachment: HADOOP-17304.001.patch

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> --
>
> Key: HADOOP-17304
> URL: https://issues.apache.org/jira/browse/HADOOP-17304
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17304.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
> invalidate-cache operation itself requires ROLLOVER permission on the key. 
> This ticket is opened to fix the issue caught by TestKMS.testACLs.






[jira] [Comment Edited] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-10-12 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17212819#comment-17212819
 ] 

Xiaoyu Yao edited comment on HADOOP-17208 at 10/13/20, 3:57 AM:


I agree. With HADOOP-17304, an additional INVALIDATE_CACHE ACL will need to be 
exposed for DELETE ops. The previously failing test can be used to validate 
this. Please help review the PR there; the test is kept as-is without adding 
additional ACLs. 


was (Author: xyao):
I agree. With HADOOP-17304, it will be needed to expose additional 
INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to 
validate this 

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key 
> cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted 
> key was not hit. Clients may still be able to access encrypted files by 
> connecting to KMS instances that hold a cached version of the deleted key 
> before the cache entry (10 minutes by default) expires. 






[jira] [Comment Edited] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-10-12 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17212819#comment-17212819
 ] 

Xiaoyu Yao edited comment on HADOOP-17208 at 10/13/20, 3:56 AM:


I agree. With HADOOP-17304, it will be needed to expose additional 
INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to 
validate this 


was (Author: xyao):
I agree. With HADOOP-17304, this will not be no need to expose additional 
INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to 
validate this 

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key 
> cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted 
> key was not hit. Clients may still be able to access encrypted files by 
> connecting to KMS instances that hold a cached version of the deleted key 
> before the cache entry (10 minutes by default) expires. 






[jira] [Commented] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-10-12 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17212819#comment-17212819
 ] 

Xiaoyu Yao commented on HADOOP-17208:
-

I agree. With HADOOP-17304, this will not be no need to expose additional 
INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to 
validate this 

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key 
> cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted 
> key was not hit. Clients may still be able to access encrypted files by 
> connecting to KMS instances that hold a cached version of the deleted key 
> before the cache entry (10 minutes by default) expires. 






[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache

2020-10-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17304:

Description: HADOOP-17208 sends an invalidate-cache request for a key being 
deleted. The invalidate-cache operation itself requires ROLLOVER permission on 
the key. This ticket is opened to fix the issue caught by TestKMS.testACLs.  
(was: HADOOP-17208 send invalidate cache for key being deleted. The invalidate 
cache operation itself requires ROLLOVER permission on the key. This ticket is 
opened to fix the test. )

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> --
>
> Key: HADOOP-17304
> URL: https://issues.apache.org/jira/browse/HADOOP-17304
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
> invalidate-cache operation itself requires ROLLOVER permission on the key. 
> This ticket is opened to fix the issue caught by TestKMS.testACLs.






[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache

2020-10-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17304:

Summary: KMS ACL: Allow DeleteKey Operation to Invalidate Cache  (was: Fix 
TestKMS.testACLs)

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> --
>
> Key: HADOOP-17304
> URL: https://issues.apache.org/jira/browse/HADOOP-17304
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
> invalidate-cache operation itself requires ROLLOVER permission on the key. 
> This ticket is opened to fix the test. 






[jira] [Created] (HADOOP-17304) Fix TestKMS.testACLs

2020-10-12 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17304:
---

 Summary: Fix TestKMS.testACLs
 Key: HADOOP-17304
 URL: https://issues.apache.org/jira/browse/HADOOP-17304
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HADOOP-17208 sends an invalidate-cache request for a key being deleted. The 
invalidate-cache operation itself requires ROLLOVER permission on the key. This 
ticket is opened to fix the test. 






[jira] [Commented] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-10-12 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17212623#comment-17212623
 ] 

Xiaoyu Yao commented on HADOOP-17208:
-

Good catch [~ayushtkn]. And thanks [~hexiaoqiao] for looking into this. 

The original design ties the INVALIDATE_CACHE op to the ROLLOVER ACL. The test 
itself can be fixed by allowing the "DELETE" user to have ROLLOVER, just like 
SET_KEY_MATERIAL does:

{code:java}
conf.set(KMSACLs.Type.ROLLOVER.getAclConfigKey(),
    KMSACLs.Type.ROLLOVER.toString() + ",SET_KEY_MATERIAL,DELETE");
{code}

It would be much cleaner if we had a separate INVALIDATE_CACHE ACL type to 
differentiate INVALIDATE_CACHE from ROLLOVER itself, like SET_KEY_MATERIAL 
and DELETE. 

 


> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key 
> cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted 
> key was not hit. Clients may still be able to access encrypted files by 
> connecting to KMS instances that hold a cached version of the deleted key 
> before the cache entry (10 minutes by default) expires. 






[jira] [Updated] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-29 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17284:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17284.001.patch, HADOOP-17284.002.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens such as passwords for the cluster. It currently supports 
> only the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
> store type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security. 






[jira] [Updated] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-29 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17284:

Attachment: HADOOP-17284.002.patch

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17284.001.patch, HADOOP-17284.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens such as passwords for the cluster. It currently supports 
> only the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
> store type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security. 






[jira] [Updated] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-28 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17284:

Attachment: HADOOP-17284.001.patch

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17284.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens such as passwords for the cluster. It currently supports 
> only the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
> store type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security. 






[jira] [Comment Edited] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-25 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202394#comment-17202394
 ] 

Xiaoyu Yao edited comment on HADOOP-17284 at 9/25/20, 7:29 PM:
---

https://github.com/apache/hadoop/pull/2334.patch


was (Author: xyao):
https://patch-diff.githubusercontent.com/raw/apache/hadoop/pull/2334.patch

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens such as passwords for the cluster. It currently supports 
> only the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
> store type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security. 






[jira] [Commented] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-25 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202394#comment-17202394
 ] 

Xiaoyu Yao commented on HADOOP-17284:
-

https://patch-diff.githubusercontent.com/raw/apache/hadoop/pull/2334.patch

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens such as passwords for the cluster. It currently supports 
> only the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
> store type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security. 






[jira] [Updated] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-24 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17284:

Status: Patch Available  (was: Open)

> Support BCFKS keystores for Hadoop Credential Provider
> --
>
> Key: HADOOP-17284
> URL: https://issues.apache.org/jira/browse/HADOOP-17284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hadoop Credential Provider provides an extensible mechanism to manage 
> sensitive tokens such as passwords for the cluster. It currently supports 
> only the JCEKS store type from the JDK. 
> This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
> store type for higher-security use cases, assuming the OS/JDK has been 
> updated with a FIPS security provider for Java Security. 






[jira] [Created] (HADOOP-17284) Support BCFKS keystores for Hadoop Credential Provider

2020-09-24 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17284:
---

 Summary: Support BCFKS keystores for Hadoop Credential Provider
 Key: HADOOP-17284
 URL: https://issues.apache.org/jira/browse/HADOOP-17284
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Hadoop Credential Provider provides an extensible mechanism to manage sensitive 
tokens such as passwords for the cluster. It currently supports only the JCEKS 
store type from the JDK. 

This ticket is opened to add support for the BCFKS (Bouncy Castle FIPS) key 
store type for higher-security use cases, assuming the OS/JDK has been updated 
with a FIPS security provider for Java Security. 
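A minimal sketch of the pattern involved, using only the JDK's KeyStore API: pick the keystore type from configuration instead of hardcoding it. Note the "BCFKS" type only resolves when the Bouncy Castle FIPS provider (bc-fips) is registered with Java Security; that provider is an assumption here, so this sketch falls back to the JDK default type when the configured type is unavailable.

```java
import java.security.KeyStore;
import java.security.KeyStoreException;

public class KeystoreTypeSketch {
    // Hedged sketch, not the Hadoop implementation: resolve a configured
    // keystore type, falling back to the JDK default when no installed
    // security provider supports it (e.g. "BCFKS" without bc-fips).
    static KeyStore newKeyStore(String configuredType) throws Exception {
        try {
            return KeyStore.getInstance(configuredType);
        } catch (KeyStoreException e) {
            // No provider supports the configured type; use the JDK default.
            return KeyStore.getInstance(KeyStore.getDefaultType());
        }
    }

    public static void main(String[] args) throws Exception {
        // Without bc-fips on the classpath this falls back to the default type.
        System.out.println(newKeyStore("BCFKS").getType());
    }
}
```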






[jira] [Resolved] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath

2020-09-21 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-17259.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Thanks [~ste...@apache.org] for the review. The PR has been merged. 

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from 
> classpath
> --
>
> Key: HADOOP-17259
> URL: https://issues.apache.org/jira/browse/HADOOP-17259
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Some applications, such as Tez, do not have ssl-client.xml and ssl-server.xml 
> on the classpath. Instead, they pass the parsed SSL configuration directly as 
> the input configuration object. This ticket is opened to allow this case. 
> TEZ-4096 attempts to solve this issue but takes a different approach, which 
> may not work in existing Hadoop clients that use SSLFactory from 
> hadoop-common. 






[jira] [Updated] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-09-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17208:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks all for the reviews and discussions. I've merged the change. 

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key 
> cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted 
> key was not hit. Clients may still be able to access encrypted files by 
> connecting to KMS instances that hold a cached version of the deleted key 
> before the cache entry (10 minutes by default) expires. 






[jira] [Commented] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath

2020-09-14 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195663#comment-17195663
 ] 

Xiaoyu Yao commented on HADOOP-17259:
-

cc: [~weichiu] 

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from 
> classpath
> --
>
> Key: HADOOP-17259
> URL: https://issues.apache.org/jira/browse/HADOOP-17259
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some applications, such as Tez, do not have ssl-client.xml and ssl-server.xml 
> on the classpath. Instead, they pass the parsed SSL configuration directly as 
> the input configuration object. This ticket is opened to allow this case. 
> TEZ-4096 attempts to solve this issue but takes a different approach, which 
> may not work in existing Hadoop clients that use SSLFactory from 
> hadoop-common. 






[jira] [Updated] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath

2020-09-11 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17259:

Summary: Allow SSLFactory fallback to input config if ssl-*.xml fail to 
load from classpath  (was: SSLFactory should fallback to input config if 
ssl-*.xml fail to load from classpath)

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from 
> classpath
> --
>
> Key: HADOOP-17259
> URL: https://issues.apache.org/jira/browse/HADOOP-17259
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Some applications, such as Tez, do not have ssl-client.xml and ssl-server.xml 
> on the classpath. Instead, they pass the parsed SSL configuration directly as 
> the input configuration object. This ticket is opened to allow this case. 
> TEZ-4096 attempts to solve this issue but takes a different approach, which 
> may not work in existing Hadoop clients that use SSLFactory from 
> hadoop-common. 






[jira] [Created] (HADOOP-17259) SSLFactory should fallback to input config if ssl-*.xml fail to load from classpath

2020-09-11 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17259:
---

 Summary: SSLFactory should fallback to input config if ssl-*.xml 
fail to load from classpath
 Key: HADOOP-17259
 URL: https://issues.apache.org/jira/browse/HADOOP-17259
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.8.5
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Some applications, such as Tez, do not have ssl-client.xml and ssl-server.xml on 
the classpath. Instead, they pass the parsed SSL configuration directly as the 
input configuration object. This ticket is opened to allow this case. TEZ-4096 
attempts to solve this issue but takes a different approach, which may not work 
in existing Hadoop clients that use SSLFactory from hadoop-common. 
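The fallback this asks for can be modeled with plain JDK Properties (the names here are illustrative, not the actual SSLFactory/Configuration API): if the ssl-*.xml resource cannot be loaded from the classpath, keep using the caller-supplied configuration instead of failing.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ConfigFallbackSketch {
    // Illustrative model only: try the classpath resource first, and fall
    // back to the input configuration when it is missing or unreadable.
    static Properties loadSslConf(String resource, Properties inputConf) {
        InputStream in = ConfigFallbackSketch.class
                .getClassLoader().getResourceAsStream(resource);
        if (in == null) {
            return inputConf; // resource not on classpath: fall back
        }
        try {
            Properties fromXml = new Properties(inputConf);
            fromXml.loadFromXML(in);
            return fromXml;
        } catch (IOException e) {
            return inputConf; // unreadable resource: also fall back
        }
    }

    public static void main(String[] args) {
        Properties input = new Properties();
        input.setProperty("ssl.client.truststore.location", "/path/to/truststore");
        // No ssl-client.xml on the classpath here, so the input config wins.
        Properties conf = loadSslConf("ssl-client.xml", input);
        System.out.println(conf.getProperty("ssl.client.truststore.location"));
    }
}
```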






[jira] [Updated] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-08-28 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17208:

Status: Patch Available  (was: Open)

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key 
> cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted 
> key was not hit. Clients may still be able to access encrypted files by 
> connecting to KMS instances that hold a cached version of the deleted key 
> before the cache entry (10 minutes by default) expires. 






[jira] [Created] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache for all the KMSClientProvider instances

2020-08-14 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17208:
---

 Summary: LoadBalanceKMSClientProvider#deleteKey should 
invalidateCache for all the KMSClientProvider instances
 Key: HADOOP-17208
 URL: https://issues.apache.org/jira/browse/HADOOP-17208
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.8.4
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Without invalidateCache, the deleted key may still exist in the servers' key 
cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleted key 
was not hit. Clients may still be able to access encrypted files by connecting 
to KMS instances that hold a cached version of the deleted key before the cache 
entry (10 minutes by default) expires. 
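A toy model of the failure mode described, in plain Java (this is not the Hadoop KMS API): each KMS instance keeps its own key cache, so deleting a key through one instance must be followed by an invalidateCache call on every instance, or stale entries survive until their TTL expires.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeleteKeySketch {
    // Hypothetical stand-in for a KMS server instance with a local key cache.
    static class KmsInstance {
        final Map<String, String> keyCache = new HashMap<>();
        void invalidateCache(String keyName) { keyCache.remove(keyName); }
    }

    // Delete through one instance, then invalidate the cache on all of them,
    // mirroring the behavior this issue asks of deleteKey.
    static void deleteKey(List<KmsInstance> instances, String keyName) {
        instances.get(0).keyCache.remove(keyName);
        for (KmsInstance i : instances) {
            i.invalidateCache(keyName);
        }
    }

    static boolean anyCached(List<KmsInstance> instances, String keyName) {
        for (KmsInstance i : instances) {
            if (i.keyCache.containsKey(keyName)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<KmsInstance> instances = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            KmsInstance inst = new KmsInstance();
            inst.keyCache.put("k1", "material");
            instances.add(inst);
        }
        deleteKey(instances, "k1");
        System.out.println(anyCached(instances, "k1")); // prints "false"
    }
}
```

Skipping the per-instance invalidation in deleteKey leaves "k1" cached on the other two instances, which is exactly the stale-key window described above.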








[jira] [Updated] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-08-14 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17208:

Summary: LoadBalanceKMSClientProvider#deleteKey should invalidateCache via 
all KMSClientProvider instances  (was: LoadBalanceKMSClientProvider#deleteKey 
should invalidateCache for all the KMSClientProvider instances)

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Without invalidateCache, the deleted key may still exist in the server-side
> key cache (CachingKeyProvider in KMSWebApp.java) of any KMS instance that did
> not serve the delete request. A client may still be able to access encrypted
> files by connecting to a KMS instance that holds a cached copy of the deleted
> key, until the cache entry expires (10 minutes by default).






[jira] [Created] (HADOOP-17162) Ozone /conf endpoint trigger kerberos replay error when SPNEGO is enabled

2020-07-28 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17162:
---

 Summary: Ozone /conf endpoint trigger kerberos replay error when 
SPNEGO is enabled 
 Key: HADOOP-17162
 URL: https://issues.apache.org/jira/browse/HADOOP-17162
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nilotpal Nandi
Assignee: Xiaoyu Yao


{code}
curl -k --negotiate -X GET -u : "https://quasar-jsajkc-8.quasar-jsajkc.root.hwx.site:9877/conf"

HTTP ERROR 403 GSSException: Failure unspecified at GSS-API level
(Mechanism level: Request is a replay (34))

URI: /conf
STATUS: 403
MESSAGE: GSSException: Failure unspecified at GSS-API level
(Mechanism level: Request is a replay (34))
SERVLET: conf
{code}






[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2020-07-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-15518:

Attachment: HADOOP-15518.002.patch

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch, HADOOP-15518.002.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior
> successful authentication has occurred in the current request. This primarily
> affects situations where multiple authentication mechanisms have been
> configured. For example, when core-site.xml has
> hadoop.http.authentication.type=kerberos and yarn-site.xml has
> yarn.timeline-service.http-authentication.type=kerberos, the result is an
> attempt to perform two Kerberos authentications for the same request. This in
> turn triggers Kerberos replay-attack detection. The javadocs for
> AuthenticationHandler
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java])
> indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use
> cases (e.g. the timeline-service issue noted above). If there is general
> agreement that the change is valid I'll add unit tests to the patch.
>  
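A hedged sketch of the behavior the javadoc promises, using hypothetical stand-ins for the servlet types: consult the request's existing authentication before invoking the handler, so a second Kerberos exchange (and the resulting replay detection) never happens.

```java
// Hypothetical stand-ins for HttpServletRequest and AuthenticationHandler;
// this sketches the intended behavior, not Hadoop's actual implementation.
class AuthRequest {
    String remoteUser; // non-null once an earlier filter authenticated the request
}

interface AuthHandlerSketch {
    String authenticate(AuthRequest request);
}

class AuthFilterSketch {
    int handlerInvocations = 0;

    String doFilter(AuthRequest request, AuthHandlerSketch handler) {
        // The fix: honor an authentication established by a prior filter
        // instead of forcing a second Kerberos exchange, which would trip
        // replay-attack detection as in the timeline-service case above.
        if (request.remoteUser != null) {
            return request.remoteUser;
        }
        handlerInvocations++;
        return handler.authenticate(request);
    }
}
```

A request that arrives already authenticated passes straight through; only unauthenticated requests reach the handler.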






[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2020-07-17 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160153#comment-17160153
 ] 

Xiaoyu Yao commented on HADOOP-15518:
-

Thanks [~jnp] for the heads-up. I took a look at the patch and it looks good to
me. Attaching a rebased patch against the latest trunk. Given that [~eyang] and
[~sunilg] looked at this years back, I will leave this open for their feedback.

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior
> successful authentication has occurred in the current request. This primarily
> affects situations where multiple authentication mechanisms have been
> configured. For example, when core-site.xml has
> hadoop.http.authentication.type=kerberos and yarn-site.xml has
> yarn.timeline-service.http-authentication.type=kerberos, the result is an
> attempt to perform two Kerberos authentications for the same request. This in
> turn triggers Kerberos replay-attack detection. The javadocs for
> AuthenticationHandler
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java])
> indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use
> cases (e.g. the timeline-service issue noted above). If there is general
> agreement that the change is valid I'll add unit tests to the patch.
>  






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-07-09 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
 Release Note: 
Added a UserGroupMapping#getGroupsSet() API and deprecated
UserGroupMapping#getGroups.

UserGroupMapping#getGroups() can be expensive as it involves a Set->List
conversion. For users with large group membership (i.e., > 1000 groups), we
recommend using getGroupsSet to avoid the conversion and to get fast membership
lookups.

  was:
Added a UserGroupMapping#getGroupsSet() API.

The UserGroupMapping#getGroups() can be expensive as it involves Set->List 
conversion. For user with large group membership (i.e., > 1000 groups), we 
recommend using getGroupSet to avoid the conversion and fast membership look up.

   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, 
> HADOOP-17079.007.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.
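The speedup the ticket is after can be illustrated with plain collections (illustrative only, not Hadoop's actual Groups/UGI code): a hash-based Set answers contains() in constant time, while a List scans linearly and the old API copies a fresh List on every call.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: a Set-returning accessor gives O(1) membership checks,
// while the List-returning one forces an O(n) scan per contains() call and a
// fresh List copy per invocation.
class GroupsSketch {
    private final Set<String> groupsSet;

    GroupsSketch(Collection<String> groups) {
        // LinkedHashSet keeps insertion order while giving hash-based lookup.
        this.groupsSet = new LinkedHashSet<>(groups);
    }

    // New-style API: no conversion, constant-time contains().
    Set<String> getGroupsSet() {
        return Collections.unmodifiableSet(groupsSet);
    }

    // Old-style API: materializes a new List on every call.
    List<String> getGroups() {
        return new ArrayList<>(groupsSet);
    }
}
```

For a user in thousands of groups, repeated membership checks against getGroupsSet() avoid both the per-call copy and the linear scan.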






[jira] [Comment Edited] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-07-08 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143104#comment-17143104
 ] 

Xiaoyu Yao edited comment on HADOOP-17079 at 7/8/20, 6:07 PM:
--

https://github.com/apache/hadoop/pull/2085


was (Author: xyao):
https://github.com/apache/hadoop/pull/2085.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, 
> HADOOP-17079.007.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-29 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.007.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, 
> HADOOP-17079.007.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-27 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.006.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-26 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.005.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch, HADOOP-17079.005.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-26 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.004.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, 
> HADOOP-17079.004.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-25 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.003.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-24 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.002.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-24 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144288#comment-17144288
 ] 

Xiaoyu Yao commented on HADOOP-17079:
-

Attaching the patch file to trigger Jenkins. The PR link somehow does not work
for me.

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-23 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143104#comment-17143104
 ] 

Xiaoyu Yao commented on HADOOP-17079:
-

https://github.com/apache/hadoop/pull/2085.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-21 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Description: UGI#getGroups has been optimized with HADOOP-13442 by avoiding 
the List->Set->List conversion. However the returned list is not optimized to 
contains lookup, especially the user's group membership list is huge 
(thousands+) . This ticket is opened to add a UGI#getGroupsSet and use 
Set#contains() instead of List#contains() to speed up large group look up while 
minimize List->Set conversions in Groups#getGroups() call.   (was: 
UGI#getGroups has been optimized with HADOOP-13442 by avoiding the 
List->Set->List conversion. However the returned list is not optimized to 
contains lookup. This ticket is opened to add a UGI#getGroupsSet and use 
Set#contains() instead of List#contains() to speed up large group look up while 
minimize List->Set conversions in Groups#getGroups() call. )

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands or more). This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-21 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Status: Patch Available  (was: Open)

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups. This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-20 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17079:

Description: UGI#getGroups has been optimized with HADOOP-13442 by avoiding 
the List->Set->List conversion. However the returned list is not optimized to 
contains lookup. This ticket is opened to add a UGI#getGroupsSet and use 
Set#contains() instead of List#contains() to speed up large group look up while 
minimize List->Set conversions in Groups#getGroups() call.   (was: 
UGI#getGroups has been optimized with HADOOP-13442 by avoiding the 
List->Set->List conversion. However the returned list is not optimized to 
contains lookup. This ticket is opened to add a UGI#getGroupsSet and use 
Set#contains() instead of List#contains() to speed up large group look up. )

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups. This ticket adds a UGI#getGroupsSet and uses
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in the Groups#getGroups() call.






[jira] [Created] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-20 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17079:
---

 Summary: Optimize UGI#getGroups by adding UGI#getGroupsSet
 Key: HADOOP-17079
 URL: https://issues.apache.org/jira/browse/HADOOP-17079
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
List->Set->List conversion. However, the returned list is not optimized for
contains() lookups. This ticket adds a UGI#getGroupsSet and uses Set#contains()
instead of List#contains() to speed up large group lookups.






[jira] [Updated] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch

2020-06-02 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16828:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~fengnanli] for the contribution and everyone for the reviews. I've
committed the patch to trunk.

> Zookeeper Delegation Token Manager fetch sequence number by batch
> -
>
> Key: HADOOP-16828
> URL: https://issues.apache.org/jira/browse/HADOOP-16828
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16828.001.patch, HADOOP-16828.002.patch, Screen 
> Shot 2020-01-25 at 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, 
> Screen Shot 2020-01-25 at 2.25.24 PM.png
>
>
> Currently, in ZKDelegationTokenSecretManager.java the sequence number is
> incremented by 1 each time a new token is created, which sends traffic to the
> ZooKeeper server. With multiple managers running, this causes data
> contention. Since the incrementing logic uses tryAndSet, which is optimistic
> concurrency control without locking, the contention degrades performance when
> the secret managers are under a heavy volume of traffic.
> The change here is to fetch this sequence number in batches instead of one at
> a time, which reduces the traffic sent to ZK and keeps many operations inside
> the ZK secret manager's memory.
> After putting this into production we saw a huge improvement in the RPC
> processing latency of getDelegationToken calls. And since ZK takes less
> traffic this way, other write calls, such as renew and cancel delegation
> token, also benefit from this change.
>  
>  
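The batching scheme can be sketched as follows (hypothetical names; in the real secret manager the shared counter lives in ZooKeeper and is advanced with an optimistic tryAndSet): reserve a whole range per round trip, then hand out numbers from memory.

```java
// Hypothetical sketch of batched sequence-number allocation; zkCounter
// stands in for the shared ZooKeeper counter, and zkRoundTrips counts the
// simulated ZooKeeper updates that the batching is meant to reduce.
class BatchedSeqNumSketch {
    private final int batchSize;
    private int currentSeqNum; // next number to hand out locally
    private int batchEnd;      // exclusive end of the locally reserved range
    private int zkCounter;     // shared counter, normally stored in ZooKeeper
    private int zkRoundTrips;  // simulated ZooKeeper round trips

    BatchedSeqNumSketch(int batchSize) { this.batchSize = batchSize; }

    int incrementDelegationTokenSeqNum() {
        if (currentSeqNum >= batchEnd) {
            // Local range exhausted: one "round trip" reserves a whole batch.
            zkRoundTrips++;
            currentSeqNum = zkCounter;
            zkCounter += batchSize;
            batchEnd = zkCounter;
        }
        return currentSeqNum++;
    }

    int getZkRoundTrips() { return zkRoundTrips; }
}
```

Numbers reserved but unused when a manager restarts are simply skipped, which is why a large batch size can leave holes in the sequence.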






[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch

2020-06-02 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124208#comment-17124208
 ] 

Xiaoyu Yao commented on HADOOP-16828:
-

Patch v2 LGTM, +1. I will merge it shortly. 

> Zookeeper Delegation Token Manager fetch sequence number by batch
> -
>
> Key: HADOOP-16828
> URL: https://issues.apache.org/jira/browse/HADOOP-16828
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HADOOP-16828.001.patch, HADOOP-16828.002.patch, Screen 
> Shot 2020-01-25 at 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, 
> Screen Shot 2020-01-25 at 2.25.24 PM.png
>
>
> Currently, in ZKDelegationTokenSecretManager.java the sequence number is
> incremented by 1 each time a new token is created, which sends traffic to the
> ZooKeeper server. With multiple managers running, this causes data
> contention. Since the incrementing logic uses tryAndSet, which is optimistic
> concurrency control without locking, the contention degrades performance when
> the secret managers are under a heavy volume of traffic.
> The change here is to fetch this sequence number in batches instead of one at
> a time, which reduces the traffic sent to ZK and keeps many operations inside
> the ZK secret manager's memory.
> After putting this into production we saw a huge improvement in the RPC
> processing latency of getDelegationToken calls. And since ZK takes less
> traffic this way, other write calls, such as renew and cancel delegation
> token, also benefit from this change.
>  
>  






[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch

2020-03-02 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049589#comment-17049589
 ] 

Xiaoyu Yao commented on HADOOP-16828:
-

Thanks [~fengnanli] for reporting the issue and provide the patch. The patch 
LGTM overall. The performance improvement is impressive. Here are a few minor 
comments.

ZKDelegationTokenSecretManager.java

Line:100 NIT: can we add a token as part of the prefix for the new key?
i.e. "token.seqnum.batch.size"

Line 559: getDelegationTokenSeqNum() this function needs to be changed as 
the delTokenSeqCounter.getCount() will be updated in batch. We should return 
currentSeqNum here instead.

TestZKDelegationTokenSecretManager.java
As shown in the test, if the batch size is large, say 1000, this might leave 
holes in the sequence numbers when KMS fails over. It might be an acceptable 
tradeoff.

Please ensure the DTSM instances (tm1, tm2) are properly destroyed after the 
test by calling verifyDestroy(). 
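The batched allocation discussed above can be sketched as follows. This is an illustrative sketch, not the actual patch: an AtomicInteger stands in for the ZooKeeper-backed shared counter (so the only simulated "ZK round trip" is the addAndGet call), and names such as seqNumBatchSize and currentSeqNum are assumptions mirroring the review comments.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of batched sequence-number allocation. One shared-counter
// update reserves a whole batch, so token creation is a local in-memory
// increment except at batch boundaries.
public class BatchSeqAllocator {
  private final AtomicInteger zkCounter = new AtomicInteger(0); // stand-in for ZK
  private final int seqNumBatchSize;
  private int currentSeqNum;    // last sequence number handed out locally
  private int currentMaxSeqNum; // upper bound of the reserved batch

  public BatchSeqAllocator(int batchSize) {
    this.seqNumBatchSize = batchSize;
  }

  public synchronized int incrementDelegationTokenSeqNum() {
    if (currentSeqNum >= currentMaxSeqNum) {
      // One "ZK" update reserves seqNumBatchSize sequence numbers at once.
      currentMaxSeqNum = zkCounter.addAndGet(seqNumBatchSize);
      currentSeqNum = currentMaxSeqNum - seqNumBatchSize;
    }
    return ++currentSeqNum;
  }

  // Per the review comment: return the locally tracked number, because the
  // shared counter now jumps ahead in batches.
  public synchronized int getDelegationTokenSeqNum() {
    return currentSeqNum;
  }

  public static void main(String[] args) {
    BatchSeqAllocator alloc = new BatchSeqAllocator(5);
    for (int i = 0; i < 7; i++) {
      alloc.incrementDelegationTokenSeqNum();
    }
    // Seven tokens issued; the shared counter has advanced in batches to 10,
    // leaving a "hole" of three unused numbers if this manager fails over.
    System.out.println(alloc.getDelegationTokenSeqNum()); // 7
    System.out.println(alloc.zkCounter.get());            // 10
  }
}
```

This also makes the failover "hole" concrete: numbers reserved but never issued are simply skipped, which is the tradeoff noted above.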


> Zookeeper Delegation Token Manager fetch sequence number by batch
> -
>
> Key: HADOOP-16828
> URL: https://issues.apache.org/jira/browse/HADOOP-16828
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HADOOP-16828.001.patch, Screen Shot 2020-01-25 at 
> 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, Screen Shot 
> 2020-01-25 at 2.25.24 PM.png
>
>
> Currently in ZKDelegationTokenSecretManager.java the sequence number is 
> incremented by 1 each time a new token is created. Each increment sends a 
> request to the ZooKeeper server, and with multiple managers running this 
> causes data contention. Since the increment uses tryAndSet, an optimistic 
> concurrency control without locking, the contention degrades performance 
> when the secret managers are under heavy traffic.
> The change here is to fetch the sequence number in batches instead of one 
> at a time, which reduces the traffic sent to ZK and turns most allocations 
> into in-memory operations inside the ZK secret manager.
> After putting this into production we saw a huge improvement in the RPC 
> processing latency of getDelegationToken calls. Since ZK receives less 
> traffic this way, other write calls, such as renew and cancel delegation 
> token, also benefit from this change.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16884) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-26 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16884:

Status: Patch Available  (was: Open)

> Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped 
> stream
> ---
>
> Key: HADOOP-16884
> URL: https://issues.apache.org/jira/browse/HADOOP-16884
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
> leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. 
> This ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-25 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17044805#comment-17044805
 ] 

Xiaoyu Yao commented on HADOOP-16885:
-

cc: [~ste...@apache.org] and [~weichiu]

> Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped 
> stream
> ---
>
> Key: HADOOP-16885
> URL: https://issues.apache.org/jira/browse/HADOOP-16885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
> leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. 
> This ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-25 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17044736#comment-17044736
 ] 

Xiaoyu Yao commented on HADOOP-16885:
-

A similar issue exists with WebHdfsHandler#onCreate and RpcProgramNfs3#create; 
I will open separate HDFS JIRAs for the fix.

> Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped 
> stream
> ---
>
> Key: HADOOP-16885
> URL: https://issues.apache.org/jira/browse/HADOOP-16885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
> leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. 
> This ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-25 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17044689#comment-17044689
 ] 

Xiaoyu Yao commented on HADOOP-16885:
-

Repro steps (thanks Olivér Dózsa):
1. kinit as hdfs.
2. Try to copy into the encrypted zone directory:
hdfs dfs -cp /tmp/kms_text_file.txt 
/kms_test/encrypted_dirs/test_dir/kms_text_file.txt
3. Observe that user hdfs does not have permission to decrypt the EEK (as 
expected).
On HDP 3.1.5.0-152, the following can be seen:
Failed to close file: 
/kms_test/encrypted_dirs/test_dir/kms_text_file.txt._COPYING_ with inode: 18159
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
does not exist: /kms_test/encrypted_dirs/test_dir/kms_text_file.txt._COPYING_ 
(inode 18159) Holder DFSClient_NONMAPREDUCE_1857410465_1 does not have any open 
files.
Execute
hdfs dfs -ls /kms_test/encrypted_dirs/test_dir/
and observe there is *no* kms_text_file.txt._COPYING_ file present.
On HDP 7.1.0.1000-7, no error message can be seen. Execute
hdfs dfs -ls /kms_test/encrypted_dirs/test_dir/
and observe there *is* a kms_text_file.txt._COPYING_ file present.
4. kinit as user1 (kinit -k -t 
/home/hrt_qa/hadoopqa/keytabs/user1.headless.keytab user1).
5. Try to copy the file to the encrypted directory again:
hdfs dfs -cp /tmp/kms_text_file.txt 
/kms_test/encrypted_dirs/test_dir/kms_text_file.txt
The following happens:
On HDP 3.1.5.0-152 it succeeds, and no error message is shown.
On HDP 7.1.0.1000-7 the operation fails with:
cp: Permission denied: user=user1, access=WRITE, 
inode="/kms_test/encrypted_dirs/test_dir/kms_text_file.txt._COPYING_":hdfs:hdfs:-rw-r--r--

Expected behavior: step 5 should succeed, and no file with the _COPYING_ 
suffix should be created when a user without permission tries to copy into a 
restricted directory.
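The expected behavior implies a cleanup path: when the copy fails, close the (possibly crypto-wrapping) output stream and delete the ._COPYING_ temp file instead of leaking both. A minimal sketch of that pattern, using java.io on the local filesystem as a stand-in for the Hadoop FileSystem API; all names here are illustrative, not the actual patch:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CopyCleanupSketch {
  // Write to <target>._COPYING_, then rename to <target>; on any failure,
  // close the stream and delete the temp file so nothing is leaked.
  static void copyWithCleanup(byte[] data, File target, boolean failMidway)
      throws IOException {
    File tmp = new File(target.getParent(), target.getName() + "._COPYING_");
    OutputStream out = new FileOutputStream(tmp); // may be a wrapping stream
    try {
      out.write(data);
      if (failMidway) {
        throw new IOException("simulated failure (e.g. cannot decrypt EEK)");
      }
      out.close();
      if (!tmp.renameTo(target)) {
        throw new IOException("rename failed");
      }
    } catch (IOException e) {
      // Close the wrapped stream and remove the temp file before rethrowing.
      try { out.close(); } catch (IOException ignored) { }
      tmp.delete();
      throw e;
    }
  }

  public static void main(String[] args) {
    File dir = new File(System.getProperty("java.io.tmpdir"));
    File target = new File(dir, "copy_sketch_target.txt");
    try {
      copyWithCleanup("hello".getBytes(), target, true);
    } catch (IOException expected) {
      // The temp file must not linger after the failure.
      File tmp = new File(dir, "copy_sketch_target.txt._COPYING_");
      System.out.println("temp leaked: " + tmp.exists()); // temp leaked: false
    }
    target.delete();
  }
}
```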

> Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped 
> stream
> ---
>
> Key: HADOOP-16885
> URL: https://issues.apache.org/jira/browse/HADOOP-16885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
> leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. 
> This ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-25 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-16885:
---

 Summary: Encryption zone file copy failure leaks temp file 
._COPYING_ and wrapped stream
 Key: HADOOP-16885
 URL: https://issues.apache.org/jira/browse/HADOOP-16885
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. This 
ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-25 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16885:

Affects Version/s: 3.3.0

> Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped 
> stream
> ---
>
> Key: HADOOP-16885
> URL: https://issues.apache.org/jira/browse/HADOOP-16885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
> leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. 
> This ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16884) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream

2020-02-25 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-16884:
---

 Summary: Encryption zone file copy failure leaks temp file 
._COPYING_ and wrapped stream
 Key: HADOOP-16884
 URL: https://issues.apache.org/jira/browse/HADOOP-16884
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiaoyu Yao


Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a 
leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. This 
ticket is opened to track the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16761) KMSClientProvider does not work with client using ticket logged in externally

2019-12-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16761:

Status: Patch Available  (was: Open)

> KMSClientProvider does not work with client using ticket logged in externally 
> --
>
> Key: HADOOP-16761
> URL: https://issues.apache.org/jira/browse/HADOOP-16761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This is a regression from HDFS-13682, which checks not only the Kerberos 
> credential but also enforces that the login is non-external. This breaks 
> client applications that need to access HDFS encrypted files using a 
> Kerberos ticket logged in externally into the ticket cache.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-16761) KMSClientProvider does not work with client using ticket logged in externally

2019-12-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HDFS-15061 to HADOOP-16761:


Key: HADOOP-16761  (was: HDFS-15061)
Project: Hadoop Common  (was: Hadoop HDFS)

> KMSClientProvider does not work with client using ticket logged in externally 
> --
>
> Key: HADOOP-16761
> URL: https://issues.apache.org/jira/browse/HADOOP-16761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This is a regression from HDFS-13682, which checks not only the Kerberos 
> credential but also enforces that the login is non-external. This breaks 
> client applications that need to access HDFS encrypted files using a 
> Kerberos ticket logged in externally into the ticket cache.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15686) Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr

2019-11-18 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976954#comment-16976954
 ] 

Xiaoyu Yao commented on HADOOP-15686:
-

Thanks [~weichiu] for the update. Patch v4 LGTM, +1.

There is a minor checkstyle issue which you can fix at commit. 

 

> Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
> -
>
> Key: HADOOP-15686
> URL: https://issues.apache.org/jira/browse/HADOOP-15686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-15686.001.patch, HADOOP-15686.002.patch, 
> HADOOP-15686.003.patch, HADOOP-15686.004.patch
>
>
> After we switched the underlying server of KMS from Tomcat to Jetty, we 
> started to observe a lot of bogus messages like the following [1]. They are 
> harmless but very annoying. Let's suppress them in the log4j configuration.
> [1]
> {quote}
> Aug 20, 2018 11:26:17 AM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> buildModelAndSchemas
> SEVERE: Failed to generate the schema for the JAX-B elements
> com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
> IllegalAnnotationExceptions
> java.util.Map is an interface, and JAXB can't handle interfaces.
>   this problem is related to the following location:
>   at java.util.Map
> java.util.Map does not have a no-arg default constructor.
>   this problem is related to the following location:
>   at java.util.Map
>   at 
> com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
>   at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
>   at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169)
>   at 
> com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405)
>   at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138)
>   at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
>   at 

[jira] [Commented] (HADOOP-15686) Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr

2019-11-12 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972862#comment-16972862
 ] 

Xiaoyu Yao commented on HADOOP-15686:
-

Thanks [~weichiu] for the update. The latest patch LGTM; I verified that 
TestKMS no longer outputs the bogus log messages with the patch applied.

Can you add some verification in TestKMS that captures the LOG output and 
makes sure it does not contain the bogus message from the description? This 
way, if someone breaks it accidentally in the future, we will catch it as 
part of the regression test.
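One way to do such verification can be sketched with java.util.logging standing in for the actual log4j capture used by the test (class and logger names here are illustrative, not TestKMS code): attach a capturing Handler, run the scenario, then assert the WADL-generator noise is absent.

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureSketch {
  // Collect everything a logger emits so a test can assert on its contents.
  static class CapturingHandler extends Handler {
    final StringBuilder sb = new StringBuilder();
    @Override public void publish(LogRecord r) { sb.append(r.getMessage()).append('\n'); }
    @Override public void flush() { }
    @Override public void close() { }
  }

  public static void main(String[] args) {
    Logger logger = Logger.getLogger("kms.test");
    logger.setUseParentHandlers(false); // keep output out of the console
    CapturingHandler capture = new CapturingHandler();
    logger.addHandler(capture);

    logger.info("KMS started"); // a real test would exercise TestKMS here
    String out = capture.sb.toString();
    // Assert the bogus message from the issue description is absent.
    System.out.println(out.contains("Failed to generate the schema")); // false
  }
}
```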

> Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
> -
>
> Key: HADOOP-15686
> URL: https://issues.apache.org/jira/browse/HADOOP-15686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-15686.001.patch, HADOOP-15686.002.patch, 
> HADOOP-15686.003.patch
>
>
> After we switched the underlying server of KMS from Tomcat to Jetty, we 
> started to observe a lot of bogus messages like the following [1]. They are 
> harmless but very annoying. Let's suppress them in the log4j configuration.
> [1]
> {quote}
> Aug 20, 2018 11:26:17 AM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> buildModelAndSchemas
> SEVERE: Failed to generate the schema for the JAX-B elements
> com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
> IllegalAnnotationExceptions
> java.util.Map is an interface, and JAXB can't handle interfaces.
>   this problem is related to the following location:
>   at java.util.Map
> java.util.Map does not have a no-arg default constructor.
>   this problem is related to the following location:
>   at java.util.Map
>   at 
> com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
>   at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
>   at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169)
>   at 
> com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405)
>   at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138)
>   at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
>   at 
> 

[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2019-10-15 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952214#comment-16952214
 ] 

Xiaoyu Yao commented on HADOOP-15169:
-

Agree, thanks [~weichiu]. +1.

There is one checkstyle issue that you can fix at commit.

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, 
> HADOOP-15169.003.patch, HADOOP-15169.patch
>
>
> As of now, *hadoop.ssl.enabled.protocols* does not take effect for all the 
> HTTP servers (only the DataNode HTTP server uses this config).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2019-10-14 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951411#comment-16951411
 ] 

Xiaoyu Yao commented on HADOOP-15169:
-

Thanks [~weichiu] for the v3 patch. It looks good to me. One minor comment: 
on line 551 we use equals() to compare the protocol strings; do we need to 
handle the case where the order differs but the protocols are the same?

Line 551: if (!enabledProtocols.equals(SSLFactory.SSL_ENABLED_PROTOCOLS_DEFAULT)) {
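The order-sensitivity concern can be addressed by comparing the comma-separated protocol lists as sets rather than as strings. A hedged sketch (the class and method names are assumptions for illustration, not the patch):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProtocolCompareSketch {
  // Order-insensitive comparison of comma-separated protocol lists.
  static boolean sameProtocols(String a, String b) {
    Set<String> sa = new HashSet<>(Arrays.asList(a.split(",")));
    Set<String> sb = new HashSet<>(Arrays.asList(b.split(",")));
    return sa.equals(sb);
  }

  public static void main(String[] args) {
    // Same protocols in a different order: set comparison says equal,
    // plain String.equals() says not equal.
    System.out.println(sameProtocols("TLSv1.2,TLSv1.3", "TLSv1.3,TLSv1.2")); // true
    System.out.println("TLSv1.2,TLSv1.3".equals("TLSv1.3,TLSv1.2"));         // false
  }
}
```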

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, 
> HADOOP-15169.003.patch, HADOOP-15169.patch
>
>
> As of now, *hadoop.ssl.enabled.protocols* does not take effect for all the 
> HTTP servers (only the DataNode HTTP server uses this config).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2019-10-11 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949694#comment-16949694
 ] 

Xiaoyu Yao commented on HADOOP-15169:
-

Thanks [~brahmareddy] and [~weichiu] for the patch. It looks good to me overall.

I just have one suggestion w.r.t. the handling of the excluded protocols. By 
default, SslContextFactory sets the following ("SSL", "SSLv2", "SSLv2Hello", 
"SSLv3") as the excluded protocols.

Instead of always resetting the excluded protocols to empty, we should remove 
from the excluded set only those contained in the enabledProtocols. This way 
we do not allow weak protocols that are not in the enabled list.

Please also add a test case to ensure that if a user adds SSLv2Hello to the 
included protocols, SSL/SSLv2/SSLv3 are still not allowed.
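The suggested handling is plain set arithmetic: start from the default excluded protocols and un-exclude only those explicitly enabled, rather than clearing the whole excluded set. A sketch under assumed names (ExcludedProtocolsSketch and effectiveExcluded are illustrative, not Hadoop or Jetty API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ExcludedProtocolsSketch {
  // Remove from the excluded set only the protocols the user enabled;
  // every other weak protocol stays blocked.
  static Set<String> effectiveExcluded(Set<String> defaultExcluded,
                                       Set<String> enabled) {
    Set<String> excluded = new HashSet<>(defaultExcluded);
    excluded.removeAll(enabled);
    return excluded;
  }

  public static void main(String[] args) {
    Set<String> defaults =
        new HashSet<>(Arrays.asList("SSL", "SSLv2", "SSLv2Hello", "SSLv3"));
    Set<String> enabled =
        new HashSet<>(Arrays.asList("TLSv1.2", "SSLv2Hello"));
    Set<String> excluded = effectiveExcluded(defaults, enabled);
    // SSLv2Hello is un-excluded; SSL/SSLv2/SSLv3 remain excluded.
    System.out.println(excluded.contains("SSLv2Hello")); // false
    System.out.println(excluded.contains("SSLv3"));      // true
  }
}
```

This is exactly the property the requested test case would assert: enabling SSLv2Hello must not implicitly allow SSL, SSLv2, or SSLv3.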

 

 

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, 
> HADOOP-15169.patch
>
>
> As of now, *hadoop.ssl.enabled.protocols* does not take effect for all the 
> HTTP servers (only the DataNode HTTP server uses this config).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868937#comment-16868937
 ] 

Xiaoyu Yao edited comment on HADOOP-16350 at 6/20/19 9:54 PM:
--

Thanks [~gss2002] for reporting the issue and [~ajayydv], [~arp], and 
[~szetszwo] for the discussion. I agree with [~ajayydv] that [~gss2002]'s 
earlier patch missed one case: the local KMS config should still be honored 
even when we are configured to ignore the remote KMS.

However, with a single boolean, how do we differentiate the local vs. remote 
NN and their KMS configurations? With the new Hadoop clients, the 
recommendation is to be configuration-less with respect to the local KMS and 
let the NN tell the client the right KMS URI. This may be an issue, as we 
would ignore all KMS URIs returned from NNs, whether local or remote.


was (Author: xyao):
Thanks [~gss2002] for reporting the issue and [~ajayydv] and [~szetszwo] for 
the discussion. I agree with [~ajayydv] that [~gss2002]'s earlier patch has 
missed one case where the local KMS config should still be honored even we 
configured to ignore remote KMS. 

However, with a single boolean, how to we differentiate local vs remote NN and 
their KMS configuration? With the new Hadoop clients where the recommendation 
is to be configureless wrt. local kms and letting NN tell the right KMS uri. 
This may be an issue as we will ignore all KMS returned from NNs, no matter is 
is local or remote. 

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch, HADOOP-16350.01.patch, 
> HADOOP-16350.02.patch, HADOOP-16350.03.patch, HADOOP-16350.04.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block with the following custom property 
> "hadoop.security.kms.client.allow.remote.kms" it will default to "true" so 
> keeping current feature of HADOOP-14104 but if specified to "false" will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104 but the way Hadoop worked before this JIRA/Issue 
> should of at least had an option specified to allow Hadoop/KMS code to 
> operate similar to how it did before by not requesting remote KMSServer URIs 
> which would than attempt to get a delegation token even if not operating on 
> encrypted zones.
> Error when KMS Server traffic is not allowed between cluster networks per 
> enterprise security standard which cannot be changed they denied the request 
> for exception so the only solution is to allow a feature to not attempt to 
> request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 

[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868937#comment-16868937
 ] 

Xiaoyu Yao commented on HADOOP-16350:
-

Thanks [~gss2002] for reporting the issue and [~ajayydv] and [~szetszwo] for 
the discussion. I agree with [~ajayydv] that [~gss2002]'s earlier patch 
missed one case: the local KMS config should still be honored even when we 
are configured to ignore the remote KMS.

However, with a single boolean, how do we differentiate the local vs. remote 
NN and their KMS configurations? With the new Hadoop clients, the 
recommendation is to be configuration-less with respect to the local KMS and 
let the NN tell the client the right KMS URI. This may be an issue, as we 
would ignore all KMS URIs returned from NNs, whether local or remote.

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch, HADOOP-16350.01.patch, 
> HADOOP-16350.02.patch, HADOOP-16350.03.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> let this area of code operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before this JIRA/issue 
> should have at least had an option to let the Hadoop/KMS code operate as it 
> did before, by not requesting remote KMSServer URIs, which would then attempt 
> to get a delegation token even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file 

[jira] [Comment Edited] (HADOOP-16231) Reduce KMS error logging severity from WARN to INFO

2019-04-02 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807870#comment-16807870
 ] 

Xiaoyu Yao edited comment on HADOOP-16231 at 4/2/19 3:46 PM:
-

Thanks [~knanasi] for reporting the issue and fixing it. Usually, the default 
log4j.properties sets the root log level to INFO. If that is the case, changing 
the log level in LoadBalancingKMSClientProvider from WARN to INFO may not 
reduce the amount of logging as expected. 
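To illustrate the point, here is a minimal log4j.properties sketch (assumed
typical defaults, not the exact file shipped with any release): with the root
logger already at INFO, a message demoted from WARN to INFO still passes the
threshold and reaches the appender, so only a per-logger override would
actually reduce volume.

{code}
# Root logger at INFO: both WARN and INFO records pass the threshold,
# so demoting the LBKMSCP message from WARN to INFO does not drop it.
log4j.rootLogger=INFO, console

# Only an explicit per-logger level above INFO would suppress it:
log4j.logger.org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider=WARN
{code}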


was (Author: xyao):
Thanks [~knanasi] for reporting the issue and fixed it. Patch LGTM, +1 and I 
will commit it shortly.

> Reduce KMS error logging severity from WARN to INFO
> ---
>
> Key: HADOOP-16231
> URL: https://issues.apache.org/jira/browse/HADOOP-16231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Trivial
> Attachments: HDFS-14404.001.patch
>
>
> When the KMS is deployed as an HA service and a failure occurs the current 
> error severity in the client code appears to be WARN. It can result in 
> excessive errors despite the fact that another instance may succeed.
> Maybe this log level can be adjusted in only the load balancing provider.
> {code}
> 19/02/27 05:10:10 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException 
> [java.net.ConnectException: Connection refused (Connection refused)]!!
> 19/02/12 20:50:09 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException:
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-16231) Reduce KMS error logging severity from WARN to INFO

2019-04-02 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HDFS-14404 to HADOOP-16231:


Affects Version/s: (was: 3.2.0)
   3.2.0
  Component/s: (was: kms)
   kms
  Key: HADOOP-16231  (was: HDFS-14404)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Reduce KMS error logging severity from WARN to INFO
> ---
>
> Key: HADOOP-16231
> URL: https://issues.apache.org/jira/browse/HADOOP-16231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Trivial
> Attachments: HDFS-14404.001.patch
>
>
> When the KMS is deployed as an HA service and a failure occurs the current 
> error severity in the client code appears to be WARN. It can result in 
> excessive errors despite the fact that another instance may succeed.
> Maybe this log level can be adjusted in only the load balancing provider.
> {code}
> 19/02/27 05:10:10 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException 
> [java.net.ConnectException: Connection refused (Connection refused)]!!
> 19/02/12 20:50:09 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException:
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16199:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~jojochuang] for the review. I just committed the patch to trunk and 
will backport to 3.2, 3.1, and 3.0, where HADOOP-14445 is included.  

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: kms
> Fix For: 3.3.0
>
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where 
> KMSLoadBlanceClientProvider does not select token correctly. 
> Here is the use case:
> The new configuration key 
> hadoop.security.kms.client.token.use.uri.format=true is set cross all the 
> cluster, including both Submitter and Yarn RM(renewer), which is not covered 
> in the test matrix in this [HADOOP-14445 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
> [~jojochuang].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803459#comment-16803459
 ] 

Xiaoyu Yao commented on HADOOP-16199:
-

{quote}The added test is almost the same as 
testTokenServiceCreationWithUriFormat, added in HADOOP-15997, except that it 
configured key provider explicitly.
{quote}
Yes. That's a valid client configuration where the client just downloaded the 
same configuration used by an Ambari/CM managed cluster, where 
hadoop.security.key.provider.path=kms://http@kms1;kms2:9600/kms
{quote}After HADOOP-14445, if configuring the KMS provider path explicitly for 
the client, the expected behavior is: the client gets a kms-dt whose credential 
alias is one of the (randomly selected) KMS URIs.
{quote}
The following code in LoadBalancingKMSClientProvider#getDelegationToken was 
added by HADOOP-14445 to set the token service field to the KMS URI so that it 
can be used across all instances. Check KMSUtil#createKeyProvider and 
HdfsKMSUtil#createKeyProvider: the URI configured above will be the one set 
into the token service field by LoadBalancingKMSClientProvider. 

{code:java}
public Token<?> getDelegationToken(final String renewer) throws IOException {
  return doOp(new ProviderCallable<Token<?>>() {
    @Override
    public Token<?> call(KMSClientProvider provider) throws IOException {
      Token<?> token = provider.getDelegationToken(renewer);
      // override the sub-provider's service with our own so it can be used
      // across all providers.
      token.setService(dtService);
      LOG.debug("New token service set. Token: ({})", token);
      return token;
    }
  }, nextIdx());
}
{code}

 

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: kms
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where 
> KMSLoadBlanceClientProvider does not select token correctly. 
> Here is the use case:
> The new configuration key 
> hadoop.security.kms.client.token.use.uri.format=true is set cross all the 
> cluster, including both Submitter and Yarn RM(renewer), which is not covered 
> in the test matrix in this [HADOOP-14445 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
> [~jojochuang].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16199:

Status: Patch Available  (was: Open)

Submitted a patch that reproduces the issue, along with the fix and 
verification.

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: kms
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where 
> KMSLoadBlanceClientProvider does not select token correctly. 
> Here is the use case:
> The new configuration key 
> hadoop.security.kms.client.token.use.uri.format=true is set cross all the 
> cluster, including both Submitter and Yarn RM(renewer), which is not covered 
> in the test matrix in this [HADOOP-14445 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
> [~jojochuang].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-18 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16199:

Affects Version/s: 3.0.2

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where 
> KMSLoadBlanceClientProvider does not select token correctly. 
> Here is the use case:
> The new configuration key 
> hadoop.security.kms.client.token.use.uri.format=true is set cross all the 
> cluster, including both Submitter and Yarn RM(renewer), which is not covered 
> in the test matrix in this [HADOOP-14445 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
> [~jojochuang].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-18 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795315#comment-16795315
 ] 

Xiaoyu Yao edited comment on HADOOP-16199 at 3/18/19 7:08 PM:
--

As can be seen below, a kms-dt with service field *Service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms]*
 can't be selected for LoadBalancingKMSClientProvider because it does not match 
its *canonical service: 172.25.36.130:9292*. Subsequent matching with 
individual KMSClientProvider also failed in this case.  

The comments explaining why the canonical service of 
LoadBalancingKMSClientProvider is hard-coded to the ip:port of the first 
KMSClientProvider instance could also be improved.

 

The proposed fix is to allow 
LoadBalancingKMSClientProvider#selectDelegationToken to match not only the 
canonical service but also the delegation token service, similar to what we 
have done in KMSClientProvider#selectDelegationToken.
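The proposed matching order can be sketched in plain Java (a hypothetical
stand-in, not the actual patch: the class name, the Map-of-strings credential
store, and the service values below are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the matching order proposed for
// LoadBalancingKMSClientProvider#selectDelegationToken: try the canonical
// service (ip:port of the first provider) first, then fall back to the
// uri-format delegation token service, mirroring what
// KMSClientProvider#selectDelegationToken already does.
public class TokenSelectSketch {

  static String selectToken(Map<String, String> credentials,
      String canonicalService, String dtService) {
    String token = credentials.get(canonicalService);
    if (token == null) {
      // Fallback added by the proposed fix: match the uri-format service,
      // e.g. kms://http@kms1;kms2:9292/kms
      token = credentials.get(dtService);
    }
    return token;
  }

  public static void main(String[] args) {
    Map<String, String> credentials = new HashMap<>();
    // The kms-dt was issued with the uri-format service field.
    credentials.put("kms://http@kms1;kms2:9292/kms", "kms-dt");

    // Canonical-service lookup alone (pre-fix behavior) finds nothing;
    // the dtService fallback selects the token.
    String selected = selectToken(credentials,
        "172.25.36.130:9292", "kms://http@kms1;kms2:9292/kms");
    System.out.println(selected);
  }
}
```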


Below is detailed failure log for reference:
{code:java}
2019-03-13 18:51:33,056 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getServerDefaults took 5ms
2019-03-13 18:51:33,086 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: KMSClientProvider created 
for KMS url: [http://c316-node3.raghav.com:9292/kms/v1/] delegation token 
service: 
[kms://http@c316-node3].[raghav.com:9292/kms|http://raghav.com:9292/kms]canonical
 service: 172.25.36.130:9292.
2019-03-13 18:51:33,087 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: KMSClientProvider created 
for KMS url: [http://c316-node4.raghav.com:9292/kms/v1/] delegation token 
service: 
[kms://http@c316-node4].[raghav.com:9292/kms|http://raghav.com:9292/kms]canonical
 service: 172.25.38.4:9292.
2019-03-13 18:51:33,089 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider: Created 
LoadBalancingKMSClientProvider for KMS url: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms]
 with 2 providers. delegation token service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms],
 canonical service: 172.25.36.130:9292
...
2019-03-13 18:51:33,112 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: Current UGI: hr1 
(auth:SIMPLE)
2019-03-13 18:51:33,141 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: Localizer, 
Service: , Ident: 
(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.security.LocalizerTokenIdentifier@54604a95)
*2019-03-13 18:51:33,141 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: kms-dt, 
Service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms],
 Ident: (kms-dt owner=hr1, renewer=yarn, realUser=oozie, 
issueDate=1552503090542, maxDate=1553107890542, sequenceNumber=27, 
masterKeyId=30)*
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
HDFS_DELEGATION_TOKEN, Service: 172.25.35.133:8020, Ident: (token for hr1: 
HDFS_DELEGATION_TOKEN owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090263, maxDate=1553107890263, sequenceNumber=443, 
masterKeyId=93)
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.security.token.Token: Cannot find class for token kind 
HIVE_DELEGATION_TOKEN
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
HIVE_DELEGATION_TOKEN, Service: hiveserver2ClientToken, Ident: 00 03 68 72 31 
04 68 69 76 65 05 6f 6f 7a 69 65 8a 01 69 78 65 2b a8 8a 01 69 9c 71 af a8 03 
8f 84
2019-03-13 18:51:33,143 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
RM_DELEGATION_TOKEN, Service: 172.25.35.133:8050, Ident: (RM_DELEGATION_TOKEN 
owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090238, maxDate=1553107890238, sequenceNumber=21, 
masterKeyId=139)
2019-03-13 18:51:33,143 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
MR_DELEGATION_TOKEN, Service: 172.25.35.133:10020, Ident: (MR_DELEGATION_TOKEN 
owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090488, 

[jira] [Updated] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-18 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16199:

Labels: kms  (was: )

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: kms
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where 
> KMSLoadBlanceClientProvider does not select token correctly. 
> Here is the use case:
> The new configuration key 
> hadoop.security.kms.client.token.use.uri.format=true is set cross all the 
> cluster, including both Submitter and Yarn RM(renewer), which is not covered 
> in the test matrix in this [HADOOP-14445 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
> [~jojochuang].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-18 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795315#comment-16795315
 ] 

Xiaoyu Yao edited comment on HADOOP-16199 at 3/18/19 7:05 PM:
--

As can be seen below, a kms-dt with service field *Service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms]*
 can't be selected for LoadBalancingKMSClientProvider because it does not match 
its *canonical service: 172.25.36.130:9292*. Subsequent matching with 
individual KMSClientProvider also failed in this case. The proposed fix is to 
allow LoadBalancingKMSClientProvider#selectDelegationToken to match not only 
the canonical service but also the delegation token service.

Also, the comments explaining why the canonical service of 
LoadBalancingKMSClientProvider is hard-coded to the ip:port of the first 
KMSClientProvider instance could be improved.

Below is detailed failure log for reference:
{code:java}
2019-03-13 18:51:33,056 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getServerDefaults took 5ms
2019-03-13 18:51:33,086 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: KMSClientProvider created 
for KMS url: [http://c316-node3.raghav.com:9292/kms/v1/] delegation token 
service: 
[kms://http@c316-node3].[raghav.com:9292/kms|http://raghav.com:9292/kms]canonical
 service: 172.25.36.130:9292.
2019-03-13 18:51:33,087 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: KMSClientProvider created 
for KMS url: [http://c316-node4.raghav.com:9292/kms/v1/] delegation token 
service: 
[kms://http@c316-node4].[raghav.com:9292/kms|http://raghav.com:9292/kms]canonical
 service: 172.25.38.4:9292.
2019-03-13 18:51:33,089 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider: Created 
LoadBalancingKMSClientProvider for KMS url: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms]
 with 2 providers. delegation token service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms],
 canonical service: 172.25.36.130:9292
...
2019-03-13 18:51:33,112 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: Current UGI: hr1 
(auth:SIMPLE)
2019-03-13 18:51:33,141 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: Localizer, 
Service: , Ident: 
(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.security.LocalizerTokenIdentifier@54604a95)
*2019-03-13 18:51:33,141 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: kms-dt, 
Service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms],
 Ident: (kms-dt owner=hr1, renewer=yarn, realUser=oozie, 
issueDate=1552503090542, maxDate=1553107890542, sequenceNumber=27, 
masterKeyId=30)*
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
HDFS_DELEGATION_TOKEN, Service: 172.25.35.133:8020, Ident: (token for hr1: 
HDFS_DELEGATION_TOKEN owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090263, maxDate=1553107890263, sequenceNumber=443, 
masterKeyId=93)
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.security.token.Token: Cannot find class for token kind 
HIVE_DELEGATION_TOKEN
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
HIVE_DELEGATION_TOKEN, Service: hiveserver2ClientToken, Ident: 00 03 68 72 31 
04 68 69 76 65 05 6f 6f 7a 69 65 8a 01 69 78 65 2b a8 8a 01 69 9c 71 af a8 03 
8f 84
2019-03-13 18:51:33,143 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
RM_DELEGATION_TOKEN, Service: 172.25.35.133:8050, Ident: (RM_DELEGATION_TOKEN 
owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090238, maxDate=1553107890238, sequenceNumber=21, 
masterKeyId=139)
2019-03-13 18:51:33,143 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
MR_DELEGATION_TOKEN, Service: 172.25.35.133:10020, Ident: (MR_DELEGATION_TOKEN 
owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090488, maxDate=1553107890488, sequenceNumber=5, 
masterKeyId=107)
2019-03-13 18:51:33,144 

[jira] [Commented] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-18 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795315#comment-16795315
 ] 

Xiaoyu Yao commented on HADOOP-16199:
-

As can be seen below, a kms-dt with service field *Service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms]*
 can't be selected for LoadBalancingKMSClientProvider because it does not match 
its *canonical service: 172.25.36.130:9292*. Subsequent matching with 
individual KMSClientProvider also failed in this case. The proposed fix is to 
allow LoadBalancingKMSClientProvider#selectDelegationToken to match not only 
the canonical service but also the delegation token service.

Below is detailed failure log for reference:

{code}
2019-03-13 18:51:33,056 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getServerDefaults took 5ms
2019-03-13 18:51:33,086 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: KMSClientProvider created 
for KMS url: [http://c316-node3.raghav.com:9292/kms/v1/] delegation token 
service: 
[kms://http@c316-node3].[raghav.com:9292/kms|http://raghav.com:9292/kms]canonical
 service: 172.25.36.130:9292.
2019-03-13 18:51:33,087 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: KMSClientProvider created 
for KMS url: [http://c316-node4.raghav.com:9292/kms/v1/] delegation token 
service: 
[kms://http@c316-node4].[raghav.com:9292/kms|http://raghav.com:9292/kms]canonical
 service: 172.25.38.4:9292.
2019-03-13 18:51:33,089 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider: Created 
LoadBalancingKMSClientProvider for KMS url: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms]
 with 2 providers. delegation token service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms],
 canonical service: 172.25.36.130:9292
...
2019-03-13 18:51:33,112 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: Current UGI: hr1 
(auth:SIMPLE)
2019-03-13 18:51:33,141 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: Localizer, 
Service: , Ident: 
(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.security.LocalizerTokenIdentifier@54604a95)
*2019-03-13 18:51:33,141 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: kms-dt, 
Service: 
kms://[h...@c316-node3.raghav.com|mailto:h...@c316-node3.raghav.com];[c316-node4.raghav.com:9292/kms|http://c316-node4.raghav.com:9292/kms],
 Ident: (kms-dt owner=hr1, renewer=yarn, realUser=oozie, 
issueDate=1552503090542, maxDate=1553107890542, sequenceNumber=27, 
masterKeyId=30)*
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
HDFS_DELEGATION_TOKEN, Service: 172.25.35.133:8020, Ident: (token for hr1: 
HDFS_DELEGATION_TOKEN owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090263, maxDate=1553107890263, sequenceNumber=443, 
masterKeyId=93)
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.security.token.Token: Cannot find class for token kind 
HIVE_DELEGATION_TOKEN
2019-03-13 18:51:33,142 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
HIVE_DELEGATION_TOKEN, Service: hiveserver2ClientToken, Ident: 00 03 68 72 31 
04 68 69 76 65 05 6f 6f 7a 69 65 8a 01 69 78 65 2b a8 8a 01 69 9c 71 af a8 03 
8f 84
2019-03-13 18:51:33,143 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
RM_DELEGATION_TOKEN, Service: 172.25.35.133:8050, Ident: (RM_DELEGATION_TOKEN 
owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090238, maxDate=1553107890238, sequenceNumber=21, 
masterKeyId=139)
2019-03-13 18:51:33,143 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: +token:Kind: 
MR_DELEGATION_TOKEN, Service: 172.25.35.133:10020, Ident: (MR_DELEGATION_TOKEN 
owner=hr1, renewer=yarn, 
realUser=oozie/c316-node1.[raghav@raghav.com|mailto:raghav@raghav.com], 
issueDate=1552503090488, maxDate=1553107890488, sequenceNumber=5, 
masterKeyId=107)
2019-03-13 18:51:33,144 DEBUG [ContainerLocalizer Downloader] 
org.apache.hadoop.crypto.key.kms.KMSClientProvider: Login UGI: hr1 (auth:SIMPLE)
2019-03-13 18:51:33,144 DEBUG [ContainerLocalizer Downloader] 

[jira] [Created] (HADOOP-16199) LoadBalancingKMSClientProvider does not select token correctly

2019-03-18 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-16199:
---

 Summary: LoadBalancingKMSClientProvider does not select token correctly
 Key: HADOOP-16199
 URL: https://issues.apache.org/jira/browse/HADOOP-16199
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


After HADOOP-14445 and HADOOP-15997, there are still cases where 
LoadBalancingKMSClientProvider does not select the token correctly. 

Here is the use case:

The new configuration key hadoop.security.kms.client.token.use.uri.format=true 
is set across the whole cluster, including both the submitter and the YARN 
RM (renewer), which is not covered in the test matrix in this [HADOOP-14445 
comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].

I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
[~jojochuang].
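The mismatch can be illustrated with a small sketch (the helper names and exact service-string shapes are assumptions based on the log excerpt above, not the actual Hadoop code): the legacy token Service field is a single ip:port, while the uri format carries the full provider list, so a credentials lookup keyed on one form will not find a token stored under the other.

```java
// Hypothetical illustration of the two KMS delegation-token service formats.
public class KmsTokenService {
    // Legacy form: a single resolved ip:port chosen by the client.
    static String legacyService(String ip, int port) {
        return ip + ":" + port;
    }
    // Uri form (hadoop.security.kms.client.token.use.uri.format=true):
    // the whole provider list, e.g. kms://http@node3;node4:9292/kms.
    static String uriService(String scheme, String hosts, int port) {
        return "kms://" + scheme + "@" + hosts + ":" + port + "/kms";
    }
    public static void main(String[] args) {
        String legacy = legacyService("172.25.35.133", 9292);
        String uri = uriService("http", "node3;node4", 9292);
        System.out.println(legacy); // 172.25.35.133:9292
        System.out.println(uri);    // kms://http@node3;node4:9292/kms
        // A token stored under one service string is invisible to a
        // renewer that looks it up under the other.
        System.out.println(legacy.equals(uri)); // false
    }
}
```

When submitter and renewer disagree on the format, the renewer's lookup by service string misses, which is the failure mode reported here.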



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16172:

Status: Patch Available  (was: Open)

> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.






[jira] [Updated] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-16172:

Attachment: HADOOP-16172-docker-hadoop-3.01.patch

> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.






[jira] [Created] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-16172:
---

 Summary: Update apache/hadoop:3 to 3.2.0 release
 Key: HADOOP-16172
 URL: https://issues.apache.org/jira/browse/HADOOP-16172
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to update apache/hadoop:3 from the 3.1.1 to the 3.2.0 release.






[jira] [Commented] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr

2019-02-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767455#comment-16767455
 ] 

Xiaoyu Yao commented on HADOOP-15686:
-

[~jojochuang], thanks for the pointer on the performance issue with 
jul_to_slf4j.

However, in patch v2, we only disable JUL for the 
com.sun.jersey.server.wadl.generators classes. This differs from the 
previous patch, where all JUL output was redirected. We may still get JUL 
output from other Jersey classes?
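The per-logger suppression discussed above can be sketched with plain java.util.logging (this is an assumption about what patch v2 does, not the patch itself): turning off one logger subtree leaves siblings at their inherited level, which is why other Jersey classes could still log through JUL.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SuppressWadlJul {
    public static void main(String[] args) {
        // Assumed behavior of patch v2: silence only the wadl.generators
        // subtree, leaving the rest of Jersey's JUL output untouched.
        Logger wadl = Logger.getLogger("com.sun.jersey.server.wadl.generators");
        wadl.setLevel(Level.OFF);

        // A logger outside that subtree still inherits the root level
        // (INFO by default), so it remains loggable.
        Logger other = Logger.getLogger("com.sun.jersey.server.impl");
        System.out.println(wadl.isLoggable(Level.SEVERE));   // false
        System.out.println(other.isLoggable(Level.SEVERE));  // true
    }
}
```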

Have you considered installing LevelChangePropagator along with the 
jul_to_slf4j approach (as before HADOOP-13597) to eliminate the 60x overhead 
mentioned in the same slf4j doc?
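For reference, LevelChangePropagator is installed via logback configuration (a sketch assuming logback-classic is the slf4j backend, per the logback manual): it pushes logback level changes down into java.util.logging, so disabled JUL statements are rejected cheaply by JUL itself instead of paying the per-record bridge translation cost.

```xml
<configuration>
  <!-- Propagate logback levels to java.util.logging so the jul-to-slf4j
       bridge's per-record overhead is avoided for disabled statements. -->
  <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
    <resetJUL>true</resetJUL>
  </contextListener>
  <root level="INFO"/>
</configuration>
```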

> Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
> -
>
> Key: HADOOP-15686
> URL: https://issues.apache.org/jira/browse/HADOOP-15686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-15686.001.patch, HADOOP-15686.002.patch
>
>
> After we switched the underlying system of KMS from Tomcat to Jetty, we 
> started to observe a lot of bogus messages like the following [1]. They are 
> harmless but very annoying. Let's suppress them in the log4j configuration.
> [1]
> {quote}
> Aug 20, 2018 11:26:17 AM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> buildModelAndSchemas
> SEVERE: Failed to generate the schema for the JAX-B elements
> com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
> IllegalAnnotationExceptions
> java.util.Map is an interface, and JAXB can't handle interfaces.
>   this problem is related to the following location:
>   at java.util.Map
> java.util.Map does not have a no-arg default constructor.
>   this problem is related to the following location:
>   at java.util.Map
>   at 
> com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319)
>   at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
>   at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
>   at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169)
>   at 
> com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405)
>   at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138)
>   at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
>   at 
> 
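The log4j-side suppression the description calls for might look like the following fragment (hypothetical, and effective for these messages only once Jersey's java.util.logging output has been routed into the slf4j/log4j pipeline):

```properties
# Hypothetical log4j fragment: silence the noisy WADL grammar generator.
log4j.logger.com.sun.jersey.server.wadl.generators=OFF
```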

[jira] [Comment Edited] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr

2019-02-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767455#comment-16767455
 ] 

Xiaoyu Yao edited comment on HADOOP-15686 at 2/13/19 6:45 PM:
--

[~jojochuang], thanks for the pointer on the performance issue with 
jul_to_slf4j.

However, in patch v2, we only disable JUL for the 
com.sun.jersey.server.wadl.generators classes. This differs from the 
previous patch, where all JUL output was redirected. We may still get JUL 
output from other Jersey classes?

Have you considered installing LevelChangePropagator along with the 
jul_to_slf4j approach (as before HADOOP-13597) to eliminate the 60x overhead 
mentioned in the same slf4j doc?


was (Author: xyao):
[~jojochuang], thanks for the pointer on the performance issue with 
jul_to_slf4j.

However, in patch v2, we only disable jul for 
com.sun.jersey.server.wadl.generators class. This will be different from 
previous patch where all jul is redirected. We may still get JUL from other 
jersey class?

Have you consider installing LevelChangePropagator along with jul_to_slf4j 
approach (before HADOO-13597) to eliminate the 60x overhead as mentioned in the 
same slf4j doc?


[jira] [Comment Edited] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr

2019-02-01 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758001#comment-16758001
 ] 

Xiaoyu Yao edited comment on HADOOP-15686 at 2/1/19 7:16 PM:
-

Thanks [~jojochuang] for working on this. The patch LGTM, +1. I just have two 
minor comments:

 

KMSWebApp.java

Line 84: Can we add a javadoc for this function?
{code:java}
/**
 * Maps jersey's java.util.logging to slf4j.
 */
{code}
Line 85: NIT: can we remove "Optionally", as we remove all handlers here 
unconditionally?

Also, we can remove "Inspired by ATLAS-16" because this code was there in 
KMSWebApp.java but was removed by HADOOP-13597 by accident. Have you 
considered declaring it in a static block, as before, to ensure it executes 
only once?
{code:java}
static {
  SLF4JBridgeHandler.removeHandlersForRootLogger();
  SLF4JBridgeHandler.install();
}

{code}


was (Author: xyao):
Thanks [~jojochuang] for working on this. The patch LGTM, +1. I just have two 
minor comments:

 

KMSWebApp.java

Line 84: Can we add a javadoc for this function?
{code:java}
 
/**

*Maps jersey's java.util.logging to slf4j

*/{code}
Line 85: NIT: can we remove "Optionally" as we are remove all here 
unconditionally?

 

Also, we can remove "Inspired by ATLAS-16" because this code was there in 
KMSWebApp.java but was removed by HADOOP-13597 by accident. Have you consider 
declaring it in a static block like before to ensure it only execute once?  
{code:java}
static {
  SLF4JBridgeHandler.removeHandlersForRootLogger();
  SLF4JBridgeHandler.install();
}

{code}
 

 

 


[jira] [Comment Edited] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr

2019-02-01 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758001#comment-16758001
 ] 

Xiaoyu Yao edited comment on HADOOP-15686 at 2/1/19 7:16 PM:
-

Thanks [~jojochuang] for working on this. The patch LGTM, +1. I just have two 
minor comments:

 

KMSWebApp.java

Line 84: Can we add a javadoc for this function?
{code:java}
/**
 * Maps jersey's java.util.logging to slf4j.
 */
{code}
Line 85: NIT: can we remove "Optionally", as we remove all handlers here 
unconditionally?

 

Also, we can remove "Inspired by ATLAS-16" because this code was there in 
KMSWebApp.java but was removed by HADOOP-13597 by accident. Have you 
considered declaring it in a static block, as before, to ensure it executes 
only once?
{code:java}
static {
  SLF4JBridgeHandler.removeHandlersForRootLogger();
  SLF4JBridgeHandler.install();
}

{code}
 

 

 


was (Author: xyao):
Thanks [~jojochuang] for working on this. The patch LGTM, +1. I just have two 
minor comments:

 

KMSWebApp.java

Line 84: Can we add a javadoc for this function?
{code:java}
 
/**

*Maps jersey's java.util.logging to slf4j

*/{code}
Line 85: NIT: can we remove "Optionally" as we are remove all here 
unconditionally?

 

Also, we can remove "Inspired by ATLAS-16" because this code was there in 
KMSWebApp.java but was removed by HADOOP-13597 by accident. 

Have you consider declaring it in a static block like before to ensure it only 
execute once.   

{code}

static {
  SLF4JBridgeHandler.removeHandlersForRootLogger();
  SLF4JBridgeHandler.install();
}

{code}

 

 

 

