[
https://issues.apache.org/jira/browse/OOZIE-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16217808#comment-16217808
]
Peter Bacsko commented on OOZIE-2871:
-------------------------------------
bq. backport HdfsCredentials, JHSCredentials, YarnRMCredentials,
HadoopTokenHelper and the delegation token creation logic in JavaActionExecutor
from master
I apologize if I'm misunderstanding something because I might not have the full
context, but there seems to be some confusion here.
There's absolutely no need to do this. Oozie has a relatively new codebase in
which we replaced the old LauncherMapper-based execution model: Oozie now has
its own Application Master. So when you execute an asynchronous action like
Java or Shell, it no longer runs as a MapReduce job; instead, it runs as a
brand new YARN application. Obviously we want to keep the execution semantics
as MR-like as possible, which is why we're putting significant effort into
supporting backward compatibility.
Classes like HdfsCredentials, JHSCredentials and YarnRMCredentials exist
because we use the YARN API directly to submit an action. If you use the
standard MR JobClient class, it automatically acquires the necessary delegation
tokens for you. With a plain YARN application we don't have this "luxury"
anymore, so we have to do it on our own.
DistCp with the older, LauncherMapper-based Oozie should work just fine,
provided that cross-realm authentication is configured properly. Based on what
I've found on various websites, running DistCp between two secure clusters, or
between a secure and an insecure cluster, can be tricky and usually requires
tweaking a few configuration values.
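For what it's worth, in LauncherMapper-based deployments the usual knob for a
cross-cluster DistCp is to list both NameNodes in the launcher configuration so
that JobClient obtains delegation tokens for both of them. Something along
these lines in the action's <configuration> should do it (host names and ports
are placeholders; dfs.namenode.kerberos.principal.pattern may also need to be
relaxed for cross-realm setups):
{code:xml}
<!-- Sketch only: host names/ports are placeholders. The oozie.launcher. prefix
     forwards the property to the launcher job, whose JobClient then fetches
     delegation tokens for both NameNodes before DistCp starts. -->
<property>
  <name>oozie.launcher.mapreduce.job.hdfs-servers</name>
  <value>hdfs://source-nn:8020,hdfs://target-nn:8020</value>
</property>
{code}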
Feel free to correct me if I'm wrong.
> When Kerberos is enabled, Oozie tasks throw "Client cannot authenticate
> via:[TOKEN, KERBEROS]"
> ---------------------------------------------------------------------------------------------------
>
> Key: OOZIE-2871
> URL: https://issues.apache.org/jira/browse/OOZIE-2871
> Project: Oozie
> Issue Type: Bug
> Components: security
> Affects Versions: 4.2.0
> Environment: Oozie version :4.2.0
> Hadoop version:2.7.2
> Both Oozie and Hadoop have Kerberos enabled.
> Reporter: yangfang
> Priority: Critical
> Attachments: OOZIE-2871.patch, secure_multicluster_distcp_workflow.xml
>
>
> When both Oozie and Hadoop had Kerberos enabled, I submitted a MapReduce job
> to Oozie, then I got the error below:
> 2017-04-27 13:37:12,677 WARN MapReduceActionExecutor: 523 - SERVER[zdh143] USER[mr] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000008-170427133546167-oozie-mr-W] ACTION[0000008-170427133546167-oozie-mr-W@mr-node] Launcher exception: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "zdh142/10.43.183.142"; destination host is: "zdh143":9000;
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "zdh142/10.43.183.142"; destination host is: "zdh143":9000;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
> at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:130)
> at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:268)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1299)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1296)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
> at org.apache.oozie.action.hadoop.MapReduceMain.submitJob(MapReduceMain.java:102)
> at org.apache.oozie.action.hadoop.MapReduceMain.run(MapReduceMain.java:64)
> at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
> at org.apache.oozie.action.hadoop.MapReduceMain.main(MapReduceMain.java:38)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:238)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
> at org.apache.hadoop.ipc.Client.call(Client.java:1451)
> ... 49 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:560)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:375)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:729)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:724)
> ... 52 more