[ https://issues.apache.org/jira/browse/TEZ-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16010193#comment-16010193 ]
TezQA commented on TEZ-3727:
----------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12868018/TEZ-3727.patch
against master revision 6317725.
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 3.0.1) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests.
Test results:
https://builds.apache.org/job/PreCommit-TEZ-Build/2441//testReport/
Console output: https://builds.apache.org/job/PreCommit-TEZ-Build/2441//console
This message is automatically generated.
> When using HDFS federation, token of tez.simple.history.logging.dir is not added, causing AM to fail
> ----------------------------------------------------------------------------------------------------
>
> Key: TEZ-3727
> URL: https://issues.apache.org/jira/browse/TEZ-3727
> Project: Apache Tez
> Issue Type: Bug
> Affects Versions: 0.8.5
> Environment: Hive 1.1.0 + Tez 0.8.5
> Reporter: Chen Xi
> Fix For: 0.8.5
>
> Attachments: TEZ-3727.patch
>
>
> If we use a different filesystem for tez.simple.history.logging.dir than for
> hive.exec.scratchdir, the Tez AM throws the following exception:
> {noformat}
> [INFO] [main] |retry.RetryInvocationHandler|: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over ns/xx.xx.xx.xx:xxxx after 1 fail over attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "nm1/xx.xx.xx.xx"; destination host is: "ns":xxxx;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
> at org.apache.hadoop.ipc.Client.call(Client.java:1472)
> at org.apache.hadoop.ipc.Client.call(Client.java:1399)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982)
> at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128)
> at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
> at org.apache.tez.dag.history.logging.impl.SimpleHistoryLoggingService.serviceInit(SimpleHistoryLoggingService.java:81)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at org.apache.tez.dag.history.HistoryEventHandler.serviceInit(HistoryEventHandler.java:100)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at org.apache.tez.dag.app.DAGAppMaster.initServices(DAGAppMaster.java:1933)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:622)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at org.apache.tez.dag.app.DAGAppMaster$8.run(DAGAppMaster.java:2586)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2583)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2388)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:680)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:643)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:730)
> at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
> at org.apache.hadoop.ipc.Client.call(Client.java:1438)
> ... 31 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
> at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:717)
> ... 34 more
> {noformat}
> That's because the delegation token for the filesystem behind tez.simple.history.logging.dir is not added during Tez AM initialization (see the sketch after the configuration excerpts below).
> In hive-site.xml we have:
> {noformat}
> <property>
>   <name>hive.exec.scratchdir</name>
>   <value>hdfs://ns/tmp/hive</value>
> </property>
> {noformat}
> In tez-site.xml we have:
> {noformat}
> <property>
>   <name>tez.simple.history.logging.dir</name>
>   <value>hdfs://ns2/history-tez</value>
> </property>
> {noformat}
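> Below is a minimal, illustrative sketch (not the attached TEZ-3727.patch) of how the AM-bound credentials could also pick up a delegation token for the filesystem behind tez.simple.history.logging.dir, so that a federated namespace such as hdfs://ns2 is reachable. The helper name addHistoryDirToken and the "yarn" renewer are assumptions for illustration only:
> {noformat}
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.Credentials;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class HistoryDirTokenSketch {
>
>   // Hypothetical helper: fetch a delegation token for the filesystem that
>   // hosts the simple history logging dir and add it to the credentials
>   // shipped with the AM container.
>   static void addHistoryDirToken(Configuration conf, Credentials credentials)
>       throws IOException {
>     String loggingDir = conf.get("tez.simple.history.logging.dir");
>     if (loggingDir == null || !UserGroupInformation.isSecurityEnabled()) {
>       return; // nothing to do without a configured dir or without Kerberos
>     }
>     Path logPath = new Path(loggingDir);
>     FileSystem fs = logPath.getFileSystem(conf); // resolves hdfs://ns2 here
>     // The renewer is normally the RM principal; "yarn" is a placeholder.
>     fs.addDelegationTokens("yarn", credentials);
>   }
>
>   public static void main(String[] args) throws IOException {
>     Configuration conf = new Configuration();
>     Credentials credentials = new Credentials();
>     addHistoryDirToken(conf, credentials);
>     // The populated credentials would then go into the AM launch context,
>     // alongside the tokens already collected for hive.exec.scratchdir.
>   }
> }
> {noformat}
> With a token for ns2 in the AM's credentials, SimpleHistoryLoggingService.serviceInit could call FileSystem.exists on hdfs://ns2/history-tez without hitting the AccessControlException shown above.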