[
https://issues.apache.org/jira/browse/STORM-346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14074748#comment-14074748
]
ASF GitHub Bot commented on STORM-346:
--------------------------------------
Github user revans2 commented on a diff in the pull request:
https://github.com/apache/incubator-storm/pull/190#discussion_r15417222
--- Diff:
storm-core/src/jvm/backtype/storm/security/auth/kerberos/AutoHDFS.java ---
@@ -0,0 +1,254 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package backtype.storm.security.auth.kerberos;
+
+import backtype.storm.Config;
+import backtype.storm.security.INimbusCredentialPlugin;
+import backtype.storm.security.auth.IAutoCredentials;
+import backtype.storm.security.auth.ICredentialsRenewer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import javax.xml.bind.DatatypeConverter;
+import java.io.*;
+import java.lang.reflect.Method;
+import java.net.URI;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Automatically gets HDFS delegation tokens and pushes them to the user's topology.
+ * The class assumes that HDFS configuration files are on the class path.
+ */
+public class AutoHDFS implements IAutoCredentials, ICredentialsRenewer, INimbusCredentialPlugin {
+ private static final Logger LOG = LoggerFactory.getLogger(AutoHDFS.class);
+ public static final String HDFS_CREDENTIALS = "HDFS_CREDENTIALS";
+
+ public void prepare(Map conf) {
+ LOG.debug("no op.");
+ }
+
+ @SuppressWarnings("unchecked")
+ private byte[] getHDFSCredsWithDelegationToken(Map conf) throws Exception {
+
+ try {
+ /**
+ * What we want to do is the following:
+ * if(UserGroupInformation.isSecurityEnabled()) {
+ *     FileSystem fs = FileSystem.get(nameNodeURI, configuration, topologySubmitterUser);
+ *     UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+ *     UserGroupInformation proxyUser = UserGroupInformation.createProxyUser(topologySubmitterUser, ugi);
+ *     Credentials credential = proxyUser.getCredentials();
+ *     fs.addDelegationToken(hdfsUser, credential);
+ * }
+ * and then return the credential object as a byte array.
+ *
+ * The following is the minimum set of configuration that needs to be set; users should have hdfs-site.xml
+ * and core-site.xml on the class path, which should set these configurations.
+ * configuration.set("hadoop.security.authentication", "KERBEROS");
+ * configuration.set("dfs.namenode.kerberos.principal", "hdfs/[email protected]");
+ * configuration.set("hadoop.security.kerberos.ticket.cache.path", "/tmp/krb5cc_1002");
+ * and the ticket cache must have the hdfs user's creds.
+ */
+ Class configurationClass = Class.forName("org.apache.hadoop.conf.Configuration");
+ Object configuration = configurationClass.newInstance();
+
+ //UserGroupInformation.isSecurityEnabled
+ final Class ugiClass = Class.forName("org.apache.hadoop.security.UserGroupInformation");
+ final Method isSecurityEnabledMethod = ugiClass.getDeclaredMethod("isSecurityEnabled");
+ boolean isSecurityEnabled = (Boolean) isSecurityEnabledMethod.invoke(null);
+
+ if(isSecurityEnabled) {
+ final String topologySubmitterUser = (String) conf.get(Config.TOPOLOGY_SUBMITTER_USER);
+ final String hdfsUser = (String) conf.get(Config.HDFS_PRINCIPAL);
+
+ //FileSystem fs = FileSystem.get(nameNodeURI, configuration, topologySubmitterUser);
+ Class fileSystemClass = Class.forName("org.apache.hadoop.fs.FileSystem");
+ Object defaultNameNodeURI = fileSystemClass.getMethod("getDefaultUri", configurationClass).invoke(null, configuration);
--- End diff --
Actually, could we just pull that URI out of the topology conf? That way
they would have a flag to turn this on or off.
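The diff resolves all Hadoop classes at runtime (`Class.forName`, `getMethod`, `invoke`) so that storm-core carries no compile-time Hadoop dependency. A minimal sketch of that reflection pattern, using a JDK class (`java.lang.Boolean`) as a stand-in for `UserGroupInformation` since Hadoop is not on the class path here:

```java
import java.lang.reflect.Method;

public class ReflectiveCall {
    // Mirrors the pattern in the diff: resolve the class and the static
    // method by name at runtime, then invoke with a null receiver.
    static boolean invokeStaticBoolean(String className, String methodName, String arg) {
        try {
            Class<?> cls = Class.forName(className);
            Method m = cls.getMethod(methodName, String.class);
            return (Boolean) m.invoke(null, arg); // null receiver: static method
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Stand-in for the UserGroupInformation.isSecurityEnabled() check;
        // with Hadoop present the class name would be
        // "org.apache.hadoop.security.UserGroupInformation".
        boolean enabled = invokeStaticBoolean("java.lang.Boolean", "parseBoolean", "true");
        System.out.println(enabled);
    }
}
```

If the class is absent the `Class.forName` call fails at runtime rather than at compile time, which is exactly the trade-off the diff makes to keep Hadoop optional.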
> (Security) Oozie style delegation tokens for HDFS/HBase
> -------------------------------------------------------
>
> Key: STORM-346
> URL: https://issues.apache.org/jira/browse/STORM-346
> Project: Apache Storm (Incubating)
> Issue Type: Bug
> Reporter: Robert Joseph Evans
> Assignee: Parth Brahmbhatt
> Labels: security
>
> Oozie has the ability to fetch delegation tokens on behalf of other users by
> running as a super user that can become a proxy user for almost anyone else.
> We should build one or more classes similar to AutoTGT that can fetch a
> delegation token for HDFS/HBase, renew the token if needed, and then once the
> token is about to permanently expire fetch a new one.
> According to some people I have talked with, HBase may need to have a JIRA
> filed against it so that it can pick up a new delegation token without
> needing to restart the process.
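The lifecycle the issue describes is: fetch a token, renew it periodically, and fetch a fresh one once the token hits its absolute max lifetime. A small sketch of that schedule with hypothetical interval and lifetime values (the real numbers come from HDFS configuration, not from this code):

```java
import java.util.ArrayList;
import java.util.List;

public class TokenLifecycle {
    static final long RENEW_INTERVAL = 10; // hypothetical renewal period
    static final long MAX_LIFETIME = 25;   // hypothetical absolute expiry

    // Returns the actions taken between 'start' and 'end' time units:
    // renew while the token is still within its max lifetime,
    // re-fetch once it is about to permanently expire.
    static List<String> plan(long start, long end) {
        List<String> actions = new ArrayList<>();
        long issued = start;
        for (long t = start; t <= end; t++) {
            if (t - issued >= MAX_LIFETIME) {
                actions.add("fetch@" + t); // token can no longer be renewed
                issued = t;
            } else if (t > start && (t - issued) % RENEW_INTERVAL == 0) {
                actions.add("renew@" + t); // ordinary renewal
            }
        }
        return actions;
    }

    public static void main(String[] args) {
        System.out.println(plan(0, 30)); // [renew@10, renew@20, fetch@25]
    }
}
```

In the PR this split falls out of the interfaces: `ICredentialsRenewer` covers the renew step, while `INimbusCredentialPlugin` lets Nimbus fetch fresh credentials on the topology submitter's behalf.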
--
This message was sent by Atlassian JIRA
(v6.2#6252)