[ https://issues.apache.org/jira/browse/LENS-212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285347#comment-14285347 ]

Aniruddha Gangopadhyay commented on LENS-212:
---------------------------------------------

Pasting the contents of the <hive-operation-handle>.* files:

<hive-operation-handle>.out :
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1421823232170_0001, Tracking URL = http://ceed32cb50ed:8088/proxy/application_1421823232170_0001/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1421823232170_0001
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2015-01-21 07:02:35,331 Stage-1 map = 0%,  reduce = 0%
2015-01-21 07:03:00,149 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 10.66 sec
2015-01-21 07:03:11,788 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 13.01 sec
MapReduce Total cumulative CPU time: 13 seconds 10 msec
Ended Job = job_1421823232170_0001
Moving data to: /tmp/lensreports/hdfsout/65964a44-4170-4ec8-b1ae-c3ff07546eb7
MapReduce Jobs Launched: 
Job 0: Map: 3  Reduce: 1   Cumulative CPU: 13.01 sec   HDFS Read: 831 HDFS Write: 3 SUCCESS
Total MapReduce CPU Time Spent: 13 seconds 10 msec

<hive-operation-handle>.err :
Failed with exception Unable to rename: hdfs://ceed32cb50ed:9000/tmp/hive-root/hive_2015-01-21_07-02-17_990_4243360662136958290-1/-ext-10000 to: /tmp/lensreports/hdfsout/65964a44-4170-4ec8-b1ae-c3ff07546eb7
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
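For reference, the "Unable to rename" message is what Hive's MoveTask surfaces when the underlying FileSystem.rename() call returns false. The snippet below is only an illustrative sketch against the plain Hadoop FileSystem API (not Lens or Hive code) for checking two conditions that commonly make rename() return false: a destination whose parent directory does not exist, or a destination that resolves to a different filesystem than the source. Neither cause is confirmed by the logs above; the paths are copied from <hive-operation-handle>.err.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Source path produced by the Hive job (copied from the .err output above).
        Path src = new Path("hdfs://ceed32cb50ed:9000/tmp/hive-root/"
                + "hive_2015-01-21_07-02-17_990_4243360662136958290-1/-ext-10000");
        // The destination is unqualified, so it resolves against fs.defaultFS at runtime.
        Path dst = new Path("/tmp/lensreports/hdfsout/65964a44-4170-4ec8-b1ae-c3ff07546eb7");

        FileSystem srcFs = src.getFileSystem(conf);
        FileSystem dstFs = dst.getFileSystem(conf);

        // FileSystem.rename() returns false rather than throwing for several failure
        // modes (for example, a missing parent directory under the destination);
        // MoveTask then reports "Unable to rename: <src> to: <dst>".
        System.out.println("src and dst on same FS : " + srcFs.getUri().equals(dstFs.getUri()));
        System.out.println("dst parent exists      : " + dstFs.exists(dst.getParent()));
        System.out.println("rename succeeded       : " + srcFs.rename(src, dst));
    }
}

If the parent directory turns out to be missing, pre-creating /tmp/lensreports/hdfsout on the destination filesystem before running the query would confirm that diagnosis; again, this is only a guess.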


> Queries for local/cluster storage tables are erroring out
> ---------------------------------------------------------
>
>                 Key: LENS-212
>                 URL: https://issues.apache.org/jira/browse/LENS-212
>             Project: Apache Lens
>          Issue Type: Bug
>          Components: examples
>            Reporter: Aniruddha Gangopadhyay
>
> On executing the following query using lens-cli (running Lens in Docker):
> query execute select * from local_fact1
> I am getting the following exception:
> Query failed with errorCode:1 with errorMessage: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask cause:Stage-0(MOVE):Stage-0: has failed! Stage-0(MOVE):Stage-0: has failed! Stage-0(MOVE):Stage-0: has failed! Stage-0(MOVE):Stage-0: has failed!
> Corresponding error logs from hive.log :
> Failed with exception Unable to rename: hdfs://a1cf79816353:9000/tmp/hive-root/hive_2015-01-20_10-35-19_557_1696101339865196124-1/-ext-10000 to: /tmp/lensreports/hdfsout/d9ee6634-ca74-4344-87d4-47e1111ddf59
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: hdfs://a1cf79816353:9000/tmp/hive-root/hive_2015-01-20_10-35-19_557_1696101339865196124-1/-ext-10000 to: /tmp/lensreports/hdfsout/d9ee6634-ca74-4344-87d4-47e1111ddf59
>         at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:99)
>         at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:198)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:173)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1491)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1250)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1062)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:885)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:880)
>         at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:158)
>         at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:76)
>         at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:215)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>         at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:500)
>         at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:224)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> This issue occurs with HDFSStorage only; tables created with DBStorage work fine.



