This issue has been documented in the twiki page:
* Use of hftp as the scheme for the read-only interface in the cluster entity [[https://issues.apache.org/jira/browse/HADOOP-10215][will not work in Oozie]].
The alternative is to use the webhdfs scheme instead, and it has been tested
with DistCp.
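For illustration, a cluster entity read-only interface switched to webhdfs
might look something like the following (endpoint host, port, and version are
placeholders, not taken from this thread):
<verbatim>
<interface type="readonly" endpoint="webhdfs://namenode.example.com:50070"
           version="2.2.0"/>
</verbatim>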
Have you set up the Hadoop confs for both clusters in the target Oozie? Are
you using hadoop-1 or hadoop-2/yarn?
Make sure all Oozie servers that Falcon talks to have the Hadoop configs
configured in oozie-site.xml:
<verbatim>
<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <value>*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3</value>
  <description>
    Comma-separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the
    HOST:PORT of the Hadoop service (JobTracker, HDFS). The wildcard '*'
    configuration is used when there is no exact match for an authority.
    The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files.
    If the path is relative, it is looked up within the Oozie configuration
    directory; the path can also be absolute (i.e. point to Hadoop client
    conf/ directories on the local filesystem).
  </description>
</property>
</verbatim>
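Note that the mapping is keyed on the exact HOST:PORT authority the job uses,
and the Oozie server typically needs a restart before changes to
oozie-site.xml take effect.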
On Fri, Jul 11, 2014 at 9:37 PM, Venkat R <[email protected]>
wrote:
> I am able to run oozie jobs on both the clusters (primaryCluster and
> backupCluster, both secured).
>
> I'm also able to run the hdfs -ls command on the primaryCluster from the
> backupCluster Oozie/Falcon machine.
>
> It's the replication job that kicks off on a backupCluster compute node that
> fails when trying to talk to the primaryCluster namenode.
>
>
> Both Falcon cluster definitions have the NN principal set.
> The core-site.xml of both the primary and backup clusters includes both the
> oozie/falcon machines in the hadoop.proxyuser.oozie and
> hadoop.proxyuser.falcon properties.
>
> I will try the command you mentioned shortly and reply.
>
>
>
> On Friday, July 11, 2014 8:53 AM, Arpit Gupta <[email protected]>
> wrote:
>
>
>
> Hmm, we have been running this setup and it works for us. Are you able to
> run any other job through oozie (without falcon)? If so, can you do the
> following:
>
> kinit as some user and make the following call using curl:
>
> curl --negotiate -u : "http://eat1-nertznn01.grid.linkedin.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=veramach"
>
> See if this works. I am at a loss right now; I will have to see what we are
> doing in our configs.
>
>
> On Thu, Jul 10, 2014 at 6:11 PM, Venkat R <[email protected]>
> wrote:
>
> > Hi Arpit,
> >
> > The jersey-server and jersey-core jars were missing; I copied them to
> > WEB-INF, and the coordinator is now able to talk to the source cluster
> > namenode to identify the new dirs and kick off the workflow.
> >
> > But the workflow fails with a similar exception to the hftp one (unable to
> > get the token) -- exception below:
> >
> > Thanks
> > Venkat
> >
> > Failing Oozie Launcher, Main class [org.apache.falcon.latedata.LateDataHandler], main() threw exception, Authentication failed, url=http://eat1-nertznn01.grid.linkedin.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=veramach
> > java.io.IOException: Authentication failed, url=http://eat1-nertznn01.grid.linkedin.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=veramach
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.init(WebHdfsFileSystem.java:490)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:531)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:424)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:953)
> >   at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:143)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:227)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:381)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:402)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathRunner.getUrl(WebHdfsFileSystem.java:652)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.init(WebHdfsFileSystem.java:485)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:531)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:424)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:678)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:689)
> >   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
> >   at org.apache.hadoop.fs.Globber.glob(Globber.java:238)
> >   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1624)
> >   at org.apache.falcon.latedata.LateDataHandler.usage(LateDataHandler.java:269)
> >   at org.apache.falcon.latedata.LateDataHandler.getFileSystemUsageMetric(LateDataHandler.java:252)
> >   at org.apache.falcon.latedata.LateDataHandler.computeStorageMetric(LateDataHandler.java:224)
> >   at org.apache.falcon.latedata.LateDataHandler.computeMetrics(LateDataHandler.java:170)
> >   at org.apache.falcon.latedata.LateDataHandler.run(LateDataHandler.java:147)
> >   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >   at org.apache.falcon.latedata.LateDataHandler.main(LateDataHandler.java:60)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:606)
> >   at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
> >   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> >   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
> >   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> >   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
> >   at java.security.AccessController.doPrivileged(Native Method)
> >   at javax.security.auth.Subject.doAs(Subject.java:415)
> >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> >   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> > Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> >   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> >   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
> >   at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> >   at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:164)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.openHttpUrlConnection(WebHdfsFileSystem.java:475)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$200(WebHdfsFileSystem.java:431)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:457)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:454)
> >   at java.security.AccessController.doPrivileged(Native Method)
> >   at javax.security.auth.Subject.doAs(Subject.java:415)
> >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getHttpUrlConnection(WebHdfsFileSystem.java:453)
> >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.init(WebHdfsFileSystem.java:487)
> >   ... 36 more
> > Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> >   at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> >   at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
> >   at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
> >   at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
> >   at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
> >   at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
> >   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:285)
> >   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:261)
> >   at java.security.AccessController.doPrivileged(Native Method)
> >   at javax.security.auth.Subject.doAs(Subject.java:415)
> >   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:261)
> >   ... 48 more
> >
> >
> >
> > On Thursday, July 10, 2014 5:58 PM, Arpit Gupta <[email protected]>
> > wrote:
> >
> >
> >
> > This looks like the Oozie war file is missing some jars that hadoop needs.
> > What version of hadoop are you running, and how did you do the Oozie war
> > setup?
> >
> > On Thursday, July 10, 2014, Venkat R <[email protected]> wrote:
> >
> > > Switched to webhdfs, but the coordinator keeps failing with the following
> > > exception and thinks the data on the other side is not present. I am
> > > running the Apache version of Oozie (4.0.1).
> > > Any thoughts?
> > >
> > > Venkat
> > >
> > > ACTION[0000006-140710220847349-oozie-oozi-C@1] Error, java.lang.NoClassDefFoundError: Could not initialize class javax.ws.rs.core.MediaType
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.jsonParse(WebHdfsFileSystem.java:287)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:630)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:535)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:424)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:953)
> > >   at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:143)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:227)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:381)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:402)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathRunner.getUrl(WebHdfsFileSystem.java:652)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.init(WebHdfsFileSystem.java:485)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:531)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:424)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:678)
> > >   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:689)
> > >   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1399)
> > >   at org.apache.oozie.dependency.FSURIHandler.exists(FSURIHandler.java:100)
> > >   at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.pathExists(CoordActionInputCheckXCommand.java:484)
> > >   at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.checkListOfPaths(CoordActionInputCheckXCommand.java:455)
> > >   at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.checkResolvedUris(CoordActionInputCheckXCommand.java:425)
> > >   at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.checkInput(CoordActionInputCheckXCommand.java:255)
> > >   at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.execute(CoordActionInputCheckXCommand.java:130)
> > >   at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.execute(CoordActionInputCheckXCommand.java:65)
> > >   at org.apache.oozie.command.XCommand.call(XCommand.java:280)
> > >   at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326)
> > >   at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255)
> > >   at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
> > >   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > >   at java.lang.Thread.run(Thread.java:662)
> > >
> > >
> > > On Thursday, July 10, 2014 2:42 PM, Venkat R <[email protected]>
> > > wrote:
> > >
> > >
> > >
> > > ok, will try now and see.
> > >
> > >
> > >
> > > On Thursday, July 10, 2014 2:37 PM, Arpit Gupta <[email protected]>
> > > wrote:
> > >
> > >
> > >
> > > From the stack trace it looks like you are using hftp. We ran into
> > > issues when running tests against secure hadoop + hftp:
> > >
> > > https://issues.apache.org/jira/browse/HDFS-5842
> > >
> > > I recommend switching the readonly interface to webhdfs.
> > >
> > > --
> > > Arpit Gupta
> > > Hortonworks Inc.
> > > http://hortonworks.com/
> > >
> > >
> > > On Jul 10, 2014, at 2:16 PM, Arpit Gupta <[email protected]> wrote:
> > >
> > > > You need to provide the NN principal in the cluster.xml for each
> > > > cluster. The following property needs to be provided in each
> > > > cluster's xml:
> > > >
> > > > dfs.namenode.kerberos.principal
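> > > >
> > > > As a rough sketch, in the cluster entity's properties this might look
> > > > something like the following (host and realm are placeholders):
> > > >
> > > > <properties>
> > > >     <property name="dfs.namenode.kerberos.principal"
> > > >               value="nn/namenode.example.com@EXAMPLE.COM"/>
> > > > </properties>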
> > > > --
> > > > Arpit Gupta
> > > > Hortonworks Inc.
> > > > http://hortonworks.com/
> > > >
> > > > On Jul 10, 2014, at 2:08 PM, Venkat R <[email protected]> wrote:
> > > >
> > > >> Using the demo example: there is a replication job that copies a
> > > >> dataset from the source to the target cluster by launching a
> > > >> REPLICATION job on the target Oozie cluster. But it fails with the
> > > >> GSSException below.
> > > >>
> > > >> I have added both the oozie servers (one each for the source and
> > > >> target clusters) to the core-site.xml of both clusters as proxyuser
> > > >> machines, as below:
> > > >>
> > > >> source-cluster and target-cluster: core-site.xml has the following:
> > > >>
> > > >> <property>
> > > >>   <name>hadoop.proxyuser.oozie.groups</name>
> > > >>   <value>users</value>
> > > >> </property>
> > > >> <property>
> > > >>   <name>hadoop.proxyuser.oozie.hosts</name>
> > > >>   <value>eat1-hcl0758.grid.linkedin.com,eat1-hcl0759.grid.linkedin.com</value>
> > > >> </property>
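> > > >>
> > > >> (Side note: if these proxyuser entries were added while the NameNodes
> > > >> were running, they can usually be reloaded with
> > > >> hdfs dfsadmin -refreshSuperUserGroupsConfiguration
> > > >> rather than a full restart.)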
> > > >>
> > > >> Appreciate any pointers.
> > > >> Venkat
> > > >>
> > > >> Failing Oozie Launcher, Main class [org.apache.falcon.latedata.LateDataHandler], main() threw exception, Unable to obtain remote token
> > > >> java.io.IOException: Unable to obtain remote token
> > > >>   at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:251)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:246)
> > > >>   at java.security.AccessController.doPrivileged(Native Method)
> > > >>   at javax.security.auth.Subject.doAs(Subject.java:415)
> > > >>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:246)
> > > >>   at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:143)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:336)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:323)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:455)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:470)
> > > >>   at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:499)
> > > >>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
> > > >>   at org.apache.hadoop.fs.Globber.glob(Globber.java:238)
> > > >>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1624)
> > > >>   at org.apache.falcon.latedata.LateDataHandler.usage(LateDataHandler.java:269)
> > > >>   at org.apache.falcon.latedata.LateDataHandler.getFileSystemUsageMetric(LateDataHandler.java:252)
> > > >>   at org.apache.falcon.latedata.LateDataHandler.computeStorageMetric(LateDataHandler.java:224)
> > > >>   at org.apache.falcon.latedata.LateDataHandler.computeMetrics(LateDataHandler.java:170)
> > > >>   at org.apache.falcon.latedata.LateDataHandler.run(LateDataHandler.java:147)
> > > >>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > > >>   at org.apache.falcon.latedata.LateDataHandler.main(LateDataHandler.java:60)
> > > >>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > >>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>   at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>   at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
> > > >>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> > > >>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
> > > >>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> > > >>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
> > > >>   at java.security.AccessController.doPrivileged(Native Method)
> > > >>   at javax.security.auth.Subject.doAs(Subject.java:415)
> > > >>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > >>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> > > >> Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> > > >>   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> > > >>   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
> > > >>   at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> > > >>   at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:164)
> > > >>   at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:371)
> > > >>   at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
> > > >>   ... 35 more
> > > >> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> > > >>   at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> > > >>   at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
> > > >>   at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
> > > >>   at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
> > > >>   at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
> > > >>   at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
> > > >>   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:285)
> > > >>   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:261)
> > > >>   at java.security.AccessController.doPrivileged(Native Method)
> > > >>   at javax.security.auth.Subject.doAs(Subject.java:415)
> > > >>   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:261)
> > > >>   ... 40 more
> > > >>
> > > >> Oozie Launcher failed, finishing Hadoop job gracefully
> > > >>
> > > >
> > >
> > >
> >
> >
>
>
--
Regards,
Venkatesh
“Perfection (in design) is achieved not when there is nothing more to add,
but rather when there is nothing more to take away.”
- Antoine de Saint-Exupéry