Yi Zhang created HIVE-16331:
-------------------------------
Summary: create orc table fails when hive.exec.scratchdir set to viewfs path in auto merge jobs
Key: HIVE-16331
URL: https://issues.apache.org/jira/browse/HIVE-16331
Project: Hive
Issue Type: Bug
Components: Hive
Reporter: Yi Zhang
If hive.exec.scratchdir is set to a viewfs path but fs.defaultFS is not a viewfs URI, then when an ORC table is created on Hive/Tez and an auto merge job is kicked off, the merge job fails with the error shown in the log below.
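For reference, a minimal reproduction sketch. The CTAS statement is the one captured in the log; the SET lines and the fs.defaultFS value are assumptions reconstructed from the log rather than an exact copy of the cluster configuration (in particular, hive.merge.tezfiles=true is only an assumption about which setting triggers the File Merge vertex):

```sql
-- Assumed environment: fs.defaultFS=hdfs://nameservice1 (plain HDFS),
-- while the Hive scratch dir lives on a viewfs mount.
SET hive.execution.engine=tez;
SET hive.exec.scratchdir=viewfs://ns-default/tmp/hive_scratchdir;
SET hive.merge.tezfiles=true;  -- assumption: this is what kicks off the auto merge (File Merge) job

-- CTAS into an ORC table; the final File Merge stage fails with "Wrong FS".
CREATE TABLE raw_trifle_tmp STORED AS ORC AS
SELECT max(trans_amount) AS max_trans_amount, profile_uuid, state, gateway_name
FROM yizhang_prod1.raw_trifle_parq
GROUP BY profile_uuid, state, gateway_name;
```

Log of the failing run: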
```
2017-03-29 23:10:57,892 INFO [main]: org.apache.hadoop.hive.ql.Driver:
Launching Job 3 out of 3
2017-03-29 23:10:57,894 INFO [main]: org.apache.hadoop.hive.ql.Driver: Starting
task [Stage-4:MAPRED] in serial mode
2017-03-29 23:10:57,894 INFO [main]:
org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager: The current user:
yizhang, session user: yizhang
2017-03-29 23:10:57,894 INFO [main]:
org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager: Current queue name is
hadoop-sync incoming queue name is hadoop-sync
2017-03-29 23:10:57,949 INFO [main]: hive.ql.Context: New scratch dir is
viewfs://ns-default/tmp/hive_scratchdir/yizhang/5da3d082-33b3-4194-97e2-005549d1b3c4/hive_2017-03-29_23-09-55_489_4030642791346631679-1
2017-03-29 23:10:57,949 DEBUG [main]:
org.apache.hadoop.hive.ql.exec.tez.DagUtils: TezDir path set
viewfs://ns-default/tmp/hive_scratchdir/yizhang/5da3d082-33b3-4194-97e2-005549d1b3c4/hive_2017-03-29_23-09-55_489_4030642791346631679-1/yizhang/_tez_scratch_dir
for user: yizhang
2017-03-29 23:10:57,950 DEBUG [main]: org.apache.hadoop.hdfs.DFSClient:
/tmp/hive_scratchdir/yizhang/5da3d082-33b3-4194-97e2-005549d1b3c4/hive_2017-03-29_23-09-55_489_4030642791346631679-1/yizhang/_tez_scratch_dir:
masked=rwxr-xr-x
2017-03-29 23:10:57,950 DEBUG [main]: org.apache.hadoop.ipc.Client: The ping
interval is 60000 ms.
2017-03-29 23:10:57,950 DEBUG [main]: org.apache.hadoop.ipc.Client: Connecting
to hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020
2017-03-29 23:10:57,951 DEBUG [IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang]: org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang: starting, having connections 3
2017-03-29 23:10:57,951 DEBUG [IPC Parameter Sending Thread #0]:
org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
sending #373
2017-03-29 23:10:57,954 DEBUG [IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang]: org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
got value #373
2017-03-29 23:10:57,955 DEBUG [main]: org.apache.hadoop.ipc.ProtobufRpcEngine:
Call: mkdirs took 5ms
2017-03-29 23:10:57,955 INFO [main]: org.apache.hadoop.hive.ql.exec.Task:
Session is already open
2017-03-29 23:10:57,955 DEBUG [IPC Parameter Sending Thread #0]:
org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
sending #374
2017-03-29 23:10:57,956 DEBUG [IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang]: org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
got value #374
2017-03-29 23:10:57,956 DEBUG [main]: org.apache.hadoop.ipc.ProtobufRpcEngine:
Call: getFileInfo took 1ms
2017-03-29 23:10:57,956 DEBUG [IPC Parameter Sending Thread #0]:
org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
sending #375
2017-03-29 23:10:57,961 DEBUG [IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang]: org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
got value #375
2017-03-29 23:10:57,961 DEBUG [main]: org.apache.hadoop.ipc.ProtobufRpcEngine:
Call: getFileInfo took 5ms
2017-03-29 23:10:57,962 DEBUG [IPC Parameter Sending Thread #0]:
org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
sending #376
2017-03-29 23:10:57,962 DEBUG [IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang]: org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
got value #376
2017-03-29 23:10:57,962 DEBUG [main]: org.apache.hadoop.ipc.ProtobufRpcEngine:
Call: getFileInfo took 1ms
2017-03-29 23:10:57,963 INFO [main]:
org.apache.hadoop.hive.ql.exec.tez.DagUtils: Resource modification time:
1490828996812
2017-03-29 23:10:57,963 DEBUG [main]: org.apache.hadoop.hive.ql.exec.Task:
Adding local resource: scheme: "viewfs" host: "ns-default" port: -1 file:
"/tmp/hive_scratchdir/yizhang/_tez_session_dir/c753ae3e-9051-436f-b2b9-d8c9d7670be3/hoodie-mr-0.2.1.jar"
2017-03-29 23:10:57,963 INFO [main]: org.apache.hadoop.hive.ql.log.PerfLogger:
<PERFLOG method=TezBuildDag from=org.apache.hadoop.hive.ql.exec.tez.TezTask>
2017-03-29 23:10:57,963 INFO [main]: org.apache.hadoop.hive.ql.log.PerfLogger:
<PERFLOG method=TezCreateVertex.File Merge
from=org.apache.hadoop.hive.ql.exec.tez.TezTask>
2017-03-29 23:10:57,966 INFO [main]: hive.ql.Context: New scratch dir is
viewfs://ns-default/tmp/hive_scratchdir/yizhang/5da3d082-33b3-4194-97e2-005549d1b3c4/hive_2017-03-29_23-09-55_489_4030642791346631679-1
2017-03-29 23:10:57,970 DEBUG [main]: org.apache.hadoop.hdfs.DFSClient:
/tmp/hive_scratchdir/yizhang/5da3d082-33b3-4194-97e2-005549d1b3c4/hive_2017-03-29_23-09-55_489_4030642791346631679-1/yizhang/_tez_scratch_dir/b3356fae-c055-4980-a2ef-7603d162fab6:
masked=rwxr-xr-x
2017-03-29 23:10:57,970 DEBUG [IPC Parameter Sending Thread #0]:
org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
sending #377
2017-03-29 23:10:57,972 DEBUG [IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from
yizhang]: org.apache.hadoop.ipc.Client: IPC Client (85121323) connection to
hadooplithiumnamenode01-sjc1.prod.uber.internal/10.67.143.155:8020 from yizhang
got value #377
2017-03-29 23:10:57,972 DEBUG [main]: org.apache.hadoop.ipc.ProtobufRpcEngine:
Call: mkdirs took 2ms
2017-03-29 23:10:57,974 INFO [main]:
org.apache.hadoop.hive.ql.exec.tez.DagUtils: Vertex has custom input? false
2017-03-29 23:10:57,975 ERROR [main]: org.apache.hadoop.hive.ql.exec.Task:
Failed to execute tez graph.
java.lang.IllegalArgumentException: Wrong FS:
hdfs://nameservice1/tmp/hive_stagingdir/yizhang_hive_2017-03-29_23-09-55_489_4030642791346631679-1/_tmp.-ext-10001,
expected: viewfs://ns-default/
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:657)
at
org.apache.hadoop.fs.viewfs.ViewFileSystem.getUriPath(ViewFileSystem.java:117)
at
org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:346)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1412)
at
org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:576)
at
org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1073)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:329)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:154)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:172)
at
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1903)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1630)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1392)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1180)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1168)
at
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2017-03-29 23:10:57,987 ERROR [main]: org.apache.hadoop.hive.ql.Driver: FAILED:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
2017-03-29 23:10:57,987 DEBUG [main]: org.apache.hadoop.hive.ql.Driver:
Shutting down query --explain
create table raw_trifle_tmp stored as orc as
SELECT max(trans_amount) as max_trans_amount, profile_uuid, state, gateway_name
FROM yizhang_prod1.raw_trifle_parq
GROUP BY profile_uuid, state, gateway_name
```
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)