Thanks, guys. This was very helpful.

The problem was resolved by changing the keyring cache to a file cache. Cheers!
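
For anyone who finds this thread later, the concrete change (following Rob's
suggestion below) was in the [libdefaults] section of /etc/krb5.conf on each
host; the path and uid template shown are the stock MIT defaults, so adjust
for your environment:

[libdefaults]
# switch the default cache from KEYRING:persistent:%{uid} to a plain file
default_ccache_name = FILE:/tmp/krb5cc_%{uid}

Then run kdestroy followed by kinit so the ticket cache is recreated as a file.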

On Wed, May 9, 2018 at 8:15 AM, Robert Levas <[email protected]> wrote:

> Lian…
>
>
>
> It appears you have a couple of issues here – neither is related to the
> Ambari-generated auth-to-local rule.
>
>
>
> 1) The realm name needs to be in all uppercase characters, so
> test_kdc.com is incorrect; it needs to be TEST_KDC.COM. If the KDC is
> configured to use the lowercase version, it needs to be changed to use
> the uppercase version. Strictly speaking, an uppercase realm name is
> only a convention, but the underlying Kerberos libraries expect the
> realm to be all uppercase, and issues arise when it is not.
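>
> For illustration, a minimal krb5.conf fragment with the realm uppercased
> could look like this (the kdc/admin_server hostnames are placeholders,
> not taken from your setup):
>
> [libdefaults]
>   default_realm = TEST_KDC.COM
>
> [realms]
>   TEST_KDC.COM = {
>     # hypothetical KDC host, substitute your own
>     kdc = kdc.example.com
>     admin_server = kdc.example.com
>   }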
>
>
>
> 2) The Kerberos ticket cache needs to be a file rather than a keyring.
> This is a Hadoop limitation: it does not know how to access cached
> tickets in a keyring. I am not sure of the details, but I do know that
> you need to make sure the ticket cache is a file. This is typically the
> default for the MIT Kerberos library; however, it can be set explicitly
> in the krb5.conf file under the [libdefaults] section using
>
>
>
> default_ccache_name = /tmp/krb5cc_%{uid}
>
>
>
> or more explicitly
>
>
>
> default_ccache_name = FILE:/tmp/krb5cc_%{uid}
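>
> You can confirm the setting took effect by destroying the old cache and
> re-authenticating; klist should then report a FILE: cache, along these
> lines (uid 1012 taken from the klist output in your message below):
>
> kdestroy
> kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-spark_cluster
> klist
> Ticket cache: FILE:/tmp/krb5cc_1012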
>
>
>
> After fixing these issues, you should have better luck with a cluster
> where Kerberos is enabled.
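>
> As a quick smoke test once the cache is a file, re-run the command that
> was failing for you:
>
> hdfs dfs -ls /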
>
>
>
> Related to this, if you wish to test out the auth-to-local rules on a host
> where Hadoop is set up (NameNode, DataNode, etc..), you can execute the
> following command:
>
>
>
> hadoop org.apache.hadoop.security.HadoopKerberosName <principal name>
>
>
>
> For example:
>
>
>
> hadoop org.apache.hadoop.security.HadoopKerberosName hdfs-spark_cluster@TEST_KDC.COM
>
> Name: hdfs-spark_cluster@TEST_KDC.COM to hdfs
>
>
>
> For more information on auth-to-local rules, see my article on the
> Hortonworks community site:
> https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html
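>
> As a quick illustration of the syntax covered there, the rule Ambari
> generated for you (quoted in your message below, shown here with the
> realm uppercased per point 1) reads: take principals with one component
> ([1:...]), render them as component@realm ($1@$0), and if the result
> matches hdfs-spark_cluster@TEST_KDC.COM, replace the entire string with
> hdfs:
>
> RULE:[1:$1@$0](hdfs-spark_cluster@TEST_KDC.COM)s/.*/hdfs/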
>
>
>
> I hope this helps…
>
> Rob
>
>
>
>
>
> *From: *Lian Jiang <[email protected]>
> *Reply-To: *"[email protected]" <[email protected]>
> *Date: *Monday, May 7, 2018 at 7:14 PM
> *To: *"[email protected]" <[email protected]>
> *Subject: *make ambari create kerberos users in custom format
>
>
>
> Hi,
>
> I am using HDP 2.6 and have enabled Kerberos. The rules generated by
> Ambari include:
>
> RULE:[1:$1@$0](hdfs-spark_cluster@test_kdc.com)s/.*/hdfs/
>
> Also, klist shows the hdfs user's principal matches the rule:
>
> [hdfs@test-namenode ~]$ klist
> Ticket cache: KEYRING:persistent:1012:1012
> Default principal: hdfs-spark_cluster@test_kdc.com
>
> The principal hdfs-spark_cluster is associated with the hdfs keytab:
>
> [hdfs@test-namenode ~]$ kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-spark_cluster
> Using existing cache: persistent:1012:1012
> Using principal: hdfs-spark_cluster@test_kdc.com
> Using keytab: /etc/security/keytabs/hdfs.headless.keytab
> Authenticated to Kerberos v5
>
> However, hdfs is NOT associated with this hdfs keytab:
>
> [hdfs@test-namenode ~]$ kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
> Using new cache: persistent:1012:krb_ccache_V36KQXp
> Using principal: hdfs@test_kdc.com
> Using keytab: /etc/security/keytabs/hdfs.headless.keytab
> kinit: Keytab contains no suitable keys for hdfs@test_kdc.com while
> getting initial credentials
>
> As you can see, kinit maps hdfs to hdfs@test_kdc.com instead of
> hdfs-spark_cluster@test_kdc.com.
>
> I guess this is the reason I got "Failed to find any Kerberos tgt" when
> doing "hdfs dfs -ls".
>
> I don't know why Ambari creates Kerberos users in the format
> "hdfs-{CLUSTERNAME}@{REALMNAME}" instead of "hdfs@{REALMNAME}".
>
>
>
> Should I follow
> https://community.hortonworks.com/articles/79574/build-a-cluster-with-custom-principal-names-using.html
> to force Ambari to create hdfs@test_kdc.com instead of
> hdfs-spark_cluster@test_kdc.com? Or am I missing something else?
>
> Thanks for any help.
>
>
>
