[ https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16452056#comment-16452056 ]

Pablo San José commented on HADOOP-15412:
-----------------------------------------

Hi Wei-Chiu,

Thank you very much for your quick response. It seems I misunderstood the 
documentation: I thought the KMS could use any of the providers listed in the 
Provider Types section of the Credential Provider API docs: 
[https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Provider_Types]
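
For reference, the provider types on that page map to keystore URIs roughly 
like this (hostnames and paths below are placeholders of mine, not real values):
{code}
user:///                                        # UserProvider: credentials held in the user's UGI
jceks://file/home/user/keystore.jceks           # JavaKeyStoreProvider on the local filesystem
jceks://hdfs@nn.example.com/kms/keystore.jceks  # JavaKeyStoreProvider on HDFS
localjceks://file/home/user/keystore.jceks      # LocalJavaKeyStoreProvider (local files only)
{code}
so I assumed the jceks://hdfs form would be valid for the KMS as well.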

I understood that the HDFS NameNode was only consulted when access to an 
encryption zone was requested, since the metadata stored in the NameNode only 
contains the EDEKs for files inside encryption zones. I thought that, because 
the key provider's keystore is itself already encrypted, it could sit in a 
non-encrypted zone of HDFS. 

This setup would have been great for running the KMS in HA, since all instances 
could share the same key provider and be configured very easily.
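
Concretely, what I had in mind was every KMS instance pointing at the same 
keystore URI, along these lines (hostname and path are hypothetical):
{code:xml}
<!-- kms-site.xml, identical on every KMS instance (the setup I had assumed) -->
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://hdfs@nn.example.com/kms/kms.jceks</value>
</property>
{code}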

Thank you again for your help.

> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --------------------------------------------------------------
>
>                 Key: HADOOP-15412
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15412
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms
>    Affects Versions: 2.7.2, 2.9.0
>         Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>  
>            Reporter: Pablo San José
>            Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as its key 
> provider backing store, but this functionality seems to be failing. 
> I followed the Hadoop docs on the matter and added the following property 
> to my kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://[email protected]/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
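> (For comparison, the shipped default for this property is file-based, 
> jceks://file@/${user.home}/kms.keystore, if I am reading the bundled 
> kms-site.xml correctly.)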
> That path exists in HDFS, and I expected the KMS to create the file test.jceks 
> for its keystore. However, the KMS failed to start with this error:
> {code:java}
> ERROR: Hadoop KMS could not be started
> 
> REASON: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
> 
> Stacktrace:
> ---------------------------------------------------
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>     at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>     at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>     at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>     at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
>     at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>     at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
>     at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
>     at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
>     at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>     at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
>     at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
>     at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
>     at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
>     at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
>     at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
>     at org.apache.catalina.core.StandardService.start(StandardService.java:525)
>     at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
>     at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
>     at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
> {code}
>  
> As far as I could work out, this error means that no FileSystem implementation 
> is registered for the "hdfs" scheme in the KMS webapp. Searching for this error 
> only turns up missing hdfs-client jars after an upgrade, which does not apply 
> here (this is a fresh installation). I have reproduced it with both Hadoop 
> 2.7.2 and 2.9.0.
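> A minimal standalone sketch of the resolution step that seems to fail inside 
> JavaKeyStoreProvider (hostname and path are placeholders, not my real cluster):
> {code:java}
> import java.io.IOException;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class SchemeCheck {
>     public static void main(String[] args) throws IOException {
>         Configuration conf = new Configuration();
>         // JavaKeyStoreProvider resolves the filesystem behind its keystore path.
>         // When no FileSystem implementation is registered for "hdfs" on the
>         // classpath, this throws the same exception as above:
>         // UnsupportedFileSystemException: No FileSystem for scheme "hdfs".
>         Path keystore = new Path("hdfs://nn.example.com/kms/test.jceks");
>         FileSystem fs = keystore.getFileSystem(conf);
>         System.out.println("Resolved filesystem: " + fs.getUri());
>     }
> }
> {code}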
> Thank you in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
