This looks like a configuration issue. Have you included all of the configuration paths and libraries needed to run Hive?
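
If it helps, this is roughly what I would check first. The paths below are placeholders for your own install locations, so treat it as a sketch rather than your exact setup:

# Hypothetical install locations -- replace with your own.
export HADOOP_HOME=/path/to/hadoop-2.0.0-mr1-cdh4.2.0
export HIVE_HOME=/path/to/hive-0.11.0-bin

# Make sure Hive picks up the same Hadoop client configuration the
# cluster uses (core-site.xml, mapred-site.xml, hdfs-site.xml).
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$HIVE_HOME/bin:$HADOOP_HOME/bin:$PATH

# Quick sanity checks.
hadoop version
ls $HIVE_CONF_DIR/hive-site.xml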


Sent from my Galaxy S®III



-------- Original message --------
From: Beau Rothrock <beau.rothr...@lookout.com>
Date: 06/13/2013 7:25 PM (GMT-06:00)
To: "<user@hive.apache.org>" <user@hive.apache.org>
Subject: Re: Hive .11 Won't Reduce


Thanks, but ACLs are disabled.

<property>
  <name>mapred.acls.enabled</name>
  <value>false</value>
</property>

--
Beau


On Jun 13, 2013, at 2:48 PM, Arafat, Moiz wrote:

> Can you check whether the ID you are using to run the queries has MapReduce 
> permissions on the cluster?
>
> Regards,
> Moiz Arafat
>
>
> On Jun 13, 2013, at 1:55 PM, Beau Rothrock <beau.rothr...@lookout.com> wrote:
>
>> Hi Y'all,
>>
>> I can connect to Hive just fine, and I can select rows just fine, but anything 
>> that appears to require map/reduce crashes consistently. Does anyone know of a 
>> fix or workaround for this?
>>
>> I'm using these:
>> hive-0.11.0-bin
>> hadoop-2.0.0-mr1-cdh4.2.0
>>
>>
>> hive> select count(*) from lookout_analytics.xfm_countries;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>> set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>> set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>> set mapred.reduce.tasks=<number>
>> Starting Job = job_201306101539_0199, Tracking URL = 
>> http://sf1r005s034:50030/jobdetails.jsp?jobid=job_201306101539_0199
>> Kill Command = /Users/brothrock/hadoop/bin/hadoop job  -kill 
>> job_201306101539_0199
>> Hadoop job information for Stage-1: number of mappers: 1; number of 
>> reducers: 1
>> 2013-06-13 10:37:42,051 Stage-1 map = 0%,  reduce = 0%
>> 2013-06-13 10:38:10,126 Stage-1 map = 100%,  reduce = 100%
>> Ended Job = job_201306101539_0199 with errors
>> Error during job, obtaining debugging information...
>> Job Tracking URL: 
>> http://sf1r005s034:50030/jobdetails.jsp?jobid=job_201306101539_0199
>> Examining task ID: task_201306101539_0199_m_000002 (and more) from job 
>> job_201306101539_0199
>> Unable to retrieve URL for Hadoop Task logs. Unable to find job tracker info 
>> port.
>>
>> Task with the most failures(4):
>> -----
>> Task ID:
>> task_201306101539_0199_m_000000
>>
>> -----
>> Diagnostic Messages for this Task:
>> java.lang.RuntimeException: java.lang.NullPointerException
>>       at 
>> org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:230)
>>       at 
>> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
>>       at 
>> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
>>       at 
>> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
>>       at 
>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
>>       at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:394)
>>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>>       at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>>       at java.security.AccessController.doPrivileged(Native Method)
>>       at javax.security.auth.Subject.doAs(Subject.java:415)
>>       at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>       at org.apache.hadoop.mapred.Child.main(Child.java:262)
>> Caused by: java.lang.NullPointerException
>>       at org.ap
>>
>> FAILED: Execution Error, return code 2 from 
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>> MapReduce Jobs Launched:
>> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
>>
>>
>> --
>> Beau
>>
>>
>

