You also have to set the tez.queue.name property, and make sure you have a
valid Kerberos ticket if you need one.
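
A minimal sketch of the extra entry, added next to the other properties in
the hive action's <configuration> block (${queueName} is assumed to resolve
to a queue your cluster actually defines):

    <property>
        <name>tez.queue.name</name>
        <value>${queueName}</value>
    </property>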

--
Laurent HATIER - Big Data & Business Intelligence Consultant at CapGemini
fr.linkedin.com/pub/laurent-hatier/25/36b/a86/

2015-08-05 4:24 GMT+02:00 Reddy, Deepak <[email protected]>:

> Hi Mona,
>
> Thanks for the reply, but I am still facing the same issue. Do you know
> if there is some other setting I missed?
>
> I am using HDP 2.2.6.0 for running different components.
>
> My workflow looks like this:
>
> <workflow-app xmlns="uri:oozie:workflow:0.2" name="hive-wf">
>     <start to="hive-node"/>
>
>     <action name="hive-node">
>         <hive xmlns="uri:oozie:hive-action:0.2">
>             <job-tracker>${jobTracker}</job-tracker>
>             <name-node>${nameNode}</name-node>
>             <prepare>
>                 <delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/hive"/>
>                 <mkdir path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data"/>
>             </prepare>
>             <job-xml>hive-site.xml</job-xml>
>             <configuration>
>                 <property>
>                     <name>mapred.job.queue.name</name>
>                     <value>${queueName}</value>
>                 </property>
>                 <property>
>                     <name>tez.lib.uris</name>
>                     <value>${nameNode}/hdp/apps/2.2.6.0-2800/tez/tez.tar.gz</value>
>                 </property>
>             </configuration>
>             <script>script.q</script>
>             <param>INPUT=${nameNode}/user/${wf:user()}/${examplesRoot}/input-data/text</param>
>             <param>OUTPUT=${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/hive</param>
>         </hive>
>         <ok to="end"/>
>         <error to="fail"/>
>     </action>
>
> Thanks,
> Deepak
>
> -----Original Message-----
> From: Mona Chitnis [mailto:[email protected]]
> Sent: Tuesday, August 04, 2015 4:12 PM
> To: [email protected]
> Subject: Re: Hive Tez not working within oozie
>
> You can modify the mapred-site.xml that you use to configure your Oozie
> server, changing the “mapreduce.framework.name” property from its default
> value of “yarn” to “yarn-tez”.
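>
> For example, a minimal sketch of that one entry in mapred-site.xml (the
> rest of the file stays as it is):
>
> <property>
>     <name>mapreduce.framework.name</name>
>     <value>yarn-tez</value>
> </property>
>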
> Mona
>
>
>
>      On Tuesday, August 4, 2015 1:19 PM, "Reddy, Deepak" <[email protected]> wrote:
>
>
> Hi,
>
> I am currently running a Hive action within Oozie. When I enable Tez as
> the execution engine, the MR job fails with the error below.
>
> Can someone please let me know how to enable Tez within an Oozie Hive
> action?
>
> Thanks,
> Deepak
>
> 2015-08-04 13:03:30,709 WARN [IPC Server handler 0 on 38734] app.DAGAppMaster: Error getting SHA from local file for resource { scheme: "hdfs" host: "ussc-dev-lnhd01.nb" port: 8020 file: "/tmp/hive/oozie/_tez_session_dir/ea55a97a-57e2-4c3b-8a8a-87b5e2c83a4b/lib" } size: 0 timestamp: 1438718526028 type: FILE visibility: PRIVATE
> java.io.FileNotFoundException: /data3/hadoop/yarn/local/usercache/oozie/appcache/application_1438670992971_0029/container_e12_1438670992971_0029_01_000001/lib (Is a directory)
>         at java.io.FileInputStream.open(Native Method)
>         at java.io.FileInputStream.<init>(FileInputStream.java:146)
>         at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
>         at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
>         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:141)
>         at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>         at org.apache.tez.dag.utils.RelocalizationUtils.getLocalSha(RelocalizationUtils.java:74)
>         at org.apache.tez.dag.app.DAGAppMaster$1.run(DAGAppMaster.java:1145)
>         at org.apache.tez.dag.app.DAGAppMaster$1.run(DAGAppMaster.java:1136)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.tez.dag.app.DAGAppMaster.isSameFile(DAGAppMaster.java:1136)
>         at org.apache.tez.dag.app.DAGAppMaster.getAdditionalLocalResourceDiff(DAGAppMaster.java:1120)
>         at org.apache.tez.dag.app.DAGAppMaster.startDAG(DAGAppMaster.java:1936)
>         at org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1100)
>         at org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:113)
>         at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:162)
>         at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7381)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 2015-08-04 13:03:30,728 INFO [IPC Server handler 0 on 38734] ipc.Server: IPC Server handler 0 on 38734, call org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPB.submitDAG from 10.154.16.124:60194 Call#2610 Retry#0
> org.apache.tez.dag.api.TezException: java.io.FileNotFoundException: Path is not a file: /tmp/hive/oozie/_tez_session_dir/ea55a97a-57e2-4c3b-8a8a-87b5e2c83a4b/lib
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:72)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:58)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1903)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1844)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1824)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:554)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>         at org.apache.tez.dag.app.DAGAppMaster.isSameFile(DAGAppMaster.java:1164)
>         at org.apache.tez.dag.app.DAGAppMaster.getAdditionalLocalResourceDiff(DAGAppMaster.java:1120)
>         at org.apache.tez.dag.app.DAGAppMaster.startDAG(DAGAppMaster.java:1936)
>         at org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1100)
>         at org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:113)
>         at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:162)
>         at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7381)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> Caused by: java.io.FileNotFoundException: Path is not a file: /tmp/hive/oozie/_tez_session_dir/ea55a97a-57e2-4c3b-8a8a-87b5e2c83a4b/lib
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:72)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:58)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1903)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1844)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1824)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:554)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1222)
>         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1210)
>         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1200)
>         at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:271)
>         at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:238)
>         at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:231)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1498)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:302)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>         at org.apache.tez.dag.utils.RelocalizationUtils.getResourceSha(RelocalizationUtils.java:86)
>         at org.apache.tez.dag.app.DAGAppMaster$1.run(DAGAppMaster.java:1153)
>         at org.apache.tez.dag.app.DAGAppMaster$1.run(DAGAppMaster.java:1136)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.tez.dag.app.DAGAppMaster.isSameFile(DAGAppMaster.java:1136)
>         ... 14 more
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is not a file: /tmp/hive/oozie/_tez_session_dir/ea55a97a-57e2-4c3b-8a8a-87b5e2c83a4b/lib
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:72)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:58)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1903)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1844)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1824)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:554)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1469)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1400)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>         at com.sun.proxy.$Proxy14.getBlockLocations(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy15.getBlockLocations(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1220)