Hi Hokie! Are the Kerberos tickets you're getting renewable?
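
The quickest check is to kinit with the same keytab on one of the tablet
servers and look at what klist reports. The principal and keytab path below
are just the ones from your mail; substitute your own:

  kinit -kt $ACCUMULO_HOME/conf/accumulo.keytab accumulo/[email protected]
  klist -f

If klist shows no "renew until" line (and no R in the ticket flags), the KDC
is handing out non-renewable tickets, and the in-process relogin that happens
around ticket expiry is a likely suspect for the failures at your 24-hour
mark. On an MIT KDC you can inspect and, if needed, raise the cap with
kadmin.local; "1week" here is only an example, and note that the krbtgt
principal for the realm caps renewable life as well:

  kadmin.local -q "getprinc accumulo/tserver1.mydomain.com"
  kadmin.local -q "modprinc -maxrenewlife 1week accumulo/tserver1.mydomain.com"
  kadmin.local -q "modprinc -maxrenewlife 1week krbtgt/MYCOMPANY.COM"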
-Sean

On Tue, Feb 25, 2014 at 4:35 PM, Hyokwon Lee <[email protected]> wrote:
> I am currently running into an issue and was hoping someone may have some
> insight into the problem.
>
> I am running Accumulo 1.4.3 on top of a Kerberos-enabled Hadoop. I followed
> these instructions in the README:
>
> "If you are running on top of hdfs with kerberos enabled, then you need to do
> some extra work. First, create an Accumulo principal
>
> kadmin.local -q "addprinc -randkey accumulo/<host.domain.name>"
>
> where <host.domain.name> is replaced by a fully qualified domain name. Export
> the principals to a keytab file. It is safer to create a unique keytab file
> for each server, but you can also glob them if you wish.
>
> kadmin.local -q "xst -k accumulo.keytab -glob accumulo*"
>
> Place this file in $ACCUMULO_HOME/conf for every host. It should be owned by
> the accumulo user and chmodded to 400. Add the following to the
> accumulo-env.sh
>
> In the accumulo-site.xml file on each node, add settings for
> general.kerberos.keytab and general.kerberos.principal, where the keytab
> setting is the absolute path to the keytab file ($ACCUMULO_HOME is valid to
> use) and principal is set to accumulo/_HOST@<REALM>, where REALM is set to
> your kerberos realm. You may use _HOST in lieu of your individual host names.
>
> <property>
>   <name>general.kerberos.keytab</name>
>   <value>$ACCUMULO_HOME/conf/accumulo.keytab</value>
> </property>
>
> <property>
>   <name>general.kerberos.principal</name>
>   <value>accumulo/_HOST@MYREALM</value>
> </property>
>
> You can then start up Accumulo as you would with the accumulo user, and it
> will automatically handle the kerberos keys needed to access hdfs.
>
> Please Note: You may have issues initializing Accumulo while running kerberos
> HDFS. You can resolve this by temporarily granting the accumulo user write
> access to the hdfs root directory, running init, and then revoking write
> permission in the root directory (be sure to maintain access to the /accumulo
> directory)."
>
> After doing so, I got Accumulo to come up, and on startup it states that it
> authenticated as accumulo/[email protected]. For the next 24
> hours it is happy and everything works. However, after the 24-hour mark,
> which is when the Kerberos ticket expires, I start seeing the following
> errors on all tservers:
>
> [security.UserGroupInformation] ERROR: PrivilegedActionException
> as:accumulo/[email protected] (auth:KERBEROS)
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Failed to find
> any Kerberos tgt)]
>
> [ipc.Client] WARN : Exception encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Failed to find
> any Kerberos tgt)]
>
> [security.UserGroupInformation] ERROR: PrivilegedActionException
> as:accumulo/[email protected] (auth:KERBEROS)
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Failed to find
> any Kerberos tgt)]
>
> As far as I can tell, this just retries and keeps failing. I checked the
> accumulo.keytab file and it is a glob, so it has entries for every server
> Accumulo runs on. Also, if I manually do a kinit -kt accumulo.keytab
> accumulo/[email protected], it works fine and I get a valid
> ticket.
> I also made sure everything in hdfs under "/accumulo" is owned by accumulo,
> so that doesn't seem to be the problem. I also made sure that after kiniting
> I can access the directory path and all subdirectories.
>
> So far the only thing that fixes my issue is bouncing all Accumulo services,
> after which it is happy again. Until I bounce the services, I get error logs
> stating it cannot scan any of the tables (unable to scan metadata,
> root_tablet, default_tablet, etc.). Has anyone else seen this issue? Did I
> possibly miss a configuration somewhere?
>
> Thanks,
>
> Hokie
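
P.S. If the tickets do turn out to be non-renewable, the knobs usually live in
two places. This is only a sketch with example lifetimes, assuming an MIT KDC
and the MYCOMPANY.COM realm from your logs:

  # /etc/krb5.conf on the Accumulo hosts
  [libdefaults]
      ticket_lifetime = 24h
      renew_lifetime = 7d

  # kdc.conf on the KDC itself
  [realms]
      MYCOMPANY.COM = {
          max_life = 24h
          max_renewable_life = 7d
      }

Already-issued tickets keep the lifetime they were granted with, so you'd want
fresh tickets afterwards; bouncing the Accumulo processes forces a fresh login
from the keytab, which matches what you're seeing work today.

P.P.S. For anyone who finds this thread while still stuck at init: the
temporary-permissions dance the README describes boils down to roughly the
following (a sketch only; under Kerberos you'd kinit as the HDFS superuser's
principal first):

  hadoop fs -chmod 777 /                # as the HDFS superuser: open up the root dir
  $ACCUMULO_HOME/bin/accumulo init      # as the accumulo user
  hadoop fs -chmod 755 /                # as the HDFS superuser: lock it back down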
