[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453169#comment-16453169
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

Filed HADOOP-15412 to get this documented. I'll close this Jira then.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>Reporter: Pablo San José
>Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key
> provider, but this functionality appears to be failing.
> I followed the Hadoop docs on this and added the following property to my
> kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://h...@nn1.example.com/kms/test.jceks</value>
>   <description>URI of the backing KeyProvider for the KMS.</description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file test.jceks
> for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started REASON: 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" Stacktrace: --- 
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme 
> "hdfs" at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220) at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:132)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>  at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080) 
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414){code}
>  
> From what I could gather, this error occurs because no FileSystem
> implementation is registered for HDFS. Searches for this error usually point
> to missing hdfs-client jars after an upgrade, which does not apply here
> (this is a fresh installation). I have tested with Hadoop 2.7.2 and 2.9.0.
> Thank you in advance.
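The failing call at the top of the stack trace, FileSystem.getFileSystemClass, is essentially a scheme-to-implementation lookup. A toy sketch (not Hadoop's actual code; class and method names here are illustrative only) of why a missing hadoop-hdfs jar on the KMS classpath produces exactly this message:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of FileSystem's scheme resolution: implementations get registered
// (via "fs.<scheme>.impl" configuration or ServiceLoader entries found in the
// jars on the classpath). If the hadoop-hdfs jar is absent, nothing ever
// registers the "hdfs" scheme, so the lookup fails at startup.
class SchemeLookup {
    private final Map<String, String> registry = new HashMap<>();

    void register(String scheme, String implClass) {
        registry.put(scheme, implClass);
    }

    String getFileSystemClass(String scheme) {
        String impl = registry.get(scheme);
        if (impl == null) {
            // Mirrors the wording of UnsupportedFileSystemException above.
            throw new IllegalStateException(
                "No FileSystem for scheme \"" + scheme + "\"");
        }
        return impl;
    }
}
```

With only the local implementation registered, asking for "hdfs" fails in the same way the KMS startup does.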



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453096#comment-16453096
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

Yeah... that's a problem. Even if you use a shared file system (like NFS?), you 
still need to make sure the network communication is authenticated and 
encrypted.


[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452123#comment-16452123
 ] 

Pablo San José commented on HADOOP-15412:
-

Yes, I want to implement KMS HA using HDFS to store the keystore.

As you said, this solution may conflict with the separation-of-duty design 
principle. However, if I understand correctly how the KMS works, an HDFS admin 
could access the keystore file, but because the key provider is encrypted by 
the KMS and only the KMS can decrypt its contents, the admin would not be able 
to decrypt anything in the cluster.

The problem I am facing when configuring KMS in HA is that the KMS does not 
manage replication of the keystore data. For example, if two KMS instances are 
deployed, clients can be configured to retry against the next instance when a 
request to one instance fails, but with a local filesystem the keystores of 
the two instances would diverge. The only solution I can think of is a shared 
filesystem for the KMS instances, which may be good enough, but if the HA 
scheme is something like round robin, there could be locking problems if the 
instances try to access the keystore concurrently.

As you said, KMS HA is not an easy task at all.

Thank you very much for your comments and your help.
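For reference, the client-side retry/failover behavior described above is what Hadoop's LoadBalancingKMSClientProvider does when the client is given a multi-host KMS URI. A sketch in core-site.xml, assuming a Hadoop version where the client key provider is set via hadoop.security.key.provider.path, with hypothetical hostnames and the default KMS port:

{code:xml}
<!-- Hypothetical hosts: clients spread requests over kms01/kms02 and fail
     over when one instance does not respond. -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@kms01.example.com;kms02.example.com:16000/kms</value>
</property>
{code}

Note this only addresses client-side failover; as discussed in this thread, the instances must still share or synchronize the same backing keystore.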


[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452084#comment-16452084
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

If I understand it correctly, you want to implement KMS HA using HDFS to store 
the keystore?

While I can see that as a simple and quick solution, it makes little sense to 
store the keystore in an unencrypted HDFS cluster. It also violates the 
initial design principle of separation of duty: with the keystore outside an 
encryption zone, an HDFS admin can easily decrypt anything in the cluster, 
defeating the purpose of the KMS.

 

KMS HA is not a trivial task. Please consult this doc for reference: 
https://hadoop.apache.org/docs/current/hadoop-kms/index.html#High_Availability


[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452056#comment-16452056
 ] 

Pablo San José commented on HADOOP-15412:
-

Hi Wei-Chiu,

Thank you very much for your quick response. So I misunderstood the 
documentation: I thought the KMS could use any of the providers listed in the 
provider-types section of the Credential Provider API docs: 
[https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Provider_Types]

I understood that the HDFS NameNode is only consulted when access to an 
encryption zone is requested, because the metadata stored in the NameNode only 
contains the EDEKs for files in an encryption zone. I thought that, because 
the key provider is already encrypted by the KMS, it could live in a 
non-encrypted zone of HDFS.

This setup would have been great for running the KMS in HA, because the 
instances could share the key provider and be configured very easily.

Thank you again for your help.


[jira] [Commented] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452021#comment-16452021
 ] 

Wei-Chiu Chuang commented on HADOOP-15412:
--

Hi Pablo, thanks for filing the issue.

What you mention is not a valid use case: the KMS can't use HDFS as its 
backing storage. As you can imagine, if HDFS were used for the KMS keystore, 
every HDFS client file access would loop through HDFS NameNode --> KMS --> 
HDFS NameNode --> KMS, a circular dependency.

The file-based KMS can use keystore files on the local file system.
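A minimal sketch of the supported file-based setup in kms-site.xml, using the jceks file scheme (the keystore path here is the documented default and only illustrative):

{code:xml}
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/${user.home}/kms.keystore</value>
  <description>Backing KeyProvider for the KMS, stored as a JCEKS keystore
    on the local file system.</description>
</property>
{code}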
