I tried adding some appmaster settings in appConfig.json to use a keytab instead. It seems OK now. Thanks.

"slider-appmaster": {
    "jvm.heapsize": "1024M",
    "slider.hdfs.keytab.dir": ".slider/keytabs/client",
    "slider.am.login.keytab.name": "client.keytab",
    "slider.keytab.principal.name": "client/h...@mem.com"
}
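For context, a sketch of where that block sits in a full appConfig.json, assuming the usual Slider layout with a "components" section (the schema URL and empty sections are placeholders; the keytab values are the ones from this thread, with the elided principal left as-is):

```json
{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {},
  "global": {},
  "components": {
    "slider-appmaster": {
      "jvm.heapsize": "1024M",
      "slider.hdfs.keytab.dir": ".slider/keytabs/client",
      "slider.am.login.keytab.name": "client.keytab",
      "slider.keytab.principal.name": "client/h...@mem.com"
    }
  }
}
```

With a keytab configured this way, the AM can re-login itself rather than depending on an HDFS delegation token that expires after a few days.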
> Subject: Re: appmaster token error
> From: jma...@hortonworks.com
> To: dev@slider.incubator.apache.org
> Date: Wed, 16 Dec 2015 15:05:21 +0000
>
> It appears as if the flex operation execution flow may not include the
> retrieval of an up-to-date HDFS delegation token. I'd go ahead and file a
> JIRA in order to have someone take a look.
>
> > On Dec 16, 2015, at 1:14 AM, sunww <spe...@outlook.com> wrote:
> >
> > Hi,
> > I'm running Docker containers with Hadoop 2.7.1 and Slider 0.8,
> > with Kerberos enabled. After a few days I flexed the application to add
> > more Docker containers, but then I found an HDFS token error in the
> > appmaster log. Am I missing something in the appmaster config?
> > Any suggestion will be appreciated. Thanks.
> >
> > This is the error in the appmaster log:
> >
> > 2015-12-15 17:50:32,099 [RoleLaunchService-014] ERROR appmaster.RoleLaunchService - Exception thrown while trying to start sqlfire:
> > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 131 for client) can't be found in cache
> > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 131 for client) can't be found in cache
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
> >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> >     at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
> >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
> >     at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >     at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
> >     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
> >     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
> >     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)