[
https://issues.apache.org/jira/browse/HADOOP-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16452123#comment-16452123
]
Pablo San José commented on HADOOP-15412:
-----------------------------------------
Yes, I want to implement KMS HA using HDFS to store the keystore.
As you said, this solution may conflict with the separation-of-duties design
principle. However, if I understand correctly how the KMS works, an HDFS admin
could access the keystore file, but because its contents are encrypted by the
KMS and only the KMS can decrypt them, the admin would not be able to decrypt
anything in the cluster.
The problem I am facing when configuring KMS in HA is that the KMS does not
manage replication of the keystore data. For example, if two KMS instances are
deployed, the client can be configured to retry against the next instance when
a request to one instance fails; but with a local filesystem, the keystores of
the two instances would diverge. The only solution I can think of is a shared
filesystem for the KMS instances, which might be good enough, but if the HA
scheme is something like round robin, there could be locking problems when the
instances access the keystore concurrently.
As you said, KMS HA is not an easy task at all.
Thank you very much for your comments and your help.
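For reference, the client-side failover mentioned here is usually expressed by
listing several KMS hosts in a single kms:// URI; Hadoop's
LoadBalancingKMSClientProvider then spreads and retries requests across them. A
minimal client-side sketch, where the host names kms01/kms02 and the port are
placeholders, not values from this cluster:
{code:xml}
<!-- core-site.xml on the client; kms01, kms02 and port 16000 are hypothetical -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@kms01;kms02:16000/kms</value>
</property>
{code}
Note that this only covers request routing between instances; it does not
replicate the backing keystore, which is exactly the gap described above.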
> Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
> --------------------------------------------------------------
>
> Key: HADOOP-15412
> URL: https://issues.apache.org/jira/browse/HADOOP-15412
> Project: Hadoop Common
> Issue Type: Bug
> Components: kms
> Affects Versions: 2.7.2, 2.9.0
> Environment: RHEL 7.3
> Hadoop 2.7.2 and 2.9.0
>
> Reporter: Pablo San José
> Priority: Major
>
> I have been trying to configure the Hadoop KMS to use HDFS as the key
> provider backend, but this functionality seems to be failing.
> I followed the Hadoop docs and added the following property to my
> kms-site.xml:
> {code:xml}
> <property>
>   <name>hadoop.kms.key.provider.uri</name>
>   <value>jceks://[email protected]/kms/test.jceks</value>
>   <description>
>     URI of the backing KeyProvider for the KMS.
>   </description>
> </property>
> {code}
> That path exists in HDFS, and I expected the KMS to create the file
> test.jceks for its keystore. However, the KMS failed to start with this
> error:
> {code:java}
> ERROR: Hadoop KMS could not be started
> REASON: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
> Stacktrace:
> ---------------------------------------------------
> org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>     at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
>     at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
>     at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
>     at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>     at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
>     at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>     at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
>     at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
>     at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
>     at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
>     at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
>     at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
>     at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
>     at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
>     at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
>     at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
>     at org.apache.catalina.core.StandardService.start(StandardService.java:525)
>     at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
>     at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
>     at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
> {code}
>
> From what I could understand, this error means that no FileSystem
> implementation is registered for the "hdfs" scheme. Everything I found about
> this error refers to missing hdfs-client jars after an upgrade, which does
> not apply here (this is a fresh installation). I have tested with Hadoop
> 2.7.2 and 2.9.0.
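> For this exception, a commonly reported cause is that the HDFS FileSystem
> implementation is simply not visible to the KMS process, either because the
> hadoop-hdfs jars are missing from its classpath or because the scheme is not
> registered. One workaround sometimes suggested (a sketch, untested on this
> setup) is to map the scheme explicitly in core-site.xml:
> {code:xml}
> <!-- core-site.xml; only helps if the hadoop-hdfs jars are on the classpath -->
> <property>
>   <name>fs.hdfs.impl</name>
>   <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
> </property>
> {code}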
> Thank you in advance.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)