Also, one more thing... Don't build from trunk as the getting started guide suggests. You want to SVN co the following:
svn co http://svn.apache.org/repos/asf/hadoop/hive/tags/release-0.4.1-rc2/

On Mon, Nov 9, 2009 at 3:30 PM, Ryan LeCompte <[email protected]> wrote:
> Hi Massoud,
>
> Couple of things you could do:
>
> 1) Log into the JobTracker console and click on the failed map/reduce
> tasks to view the logs. Check for any exceptions there.
> -- go to http://master:50030/jobdetails.jsp?jobid=job_200911091418_0004
>
> 2) Check the Hive logs, which are configured by
> build/dist/conf/hive-log4j.properties.
> -- you can change the location where the logs go by modifying
> "hive.log.dir"
>
> Thanks,
> Ryan
>
>
> On Mon, Nov 9, 2009 at 3:25 PM, Massoud Mazar <[email protected]> wrote:
>
>> I thought I had my Hive installation right, but apparently I was wrong.
>> When I follow the instructions for creating, populating, and querying
>> the "MovieLens User Ratings" table from the Hive User Guide (
>> http://wiki.apache.org/hadoop/Hive/UserGuide) I get the following error:
>>
>> hive> SELECT COUNT(1) FROM u_data;
>> Total MapReduce jobs = 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_200911091418_0004, Tracking URL =
>> http://master:50030/jobdetails.jsp?jobid=job_200911091418_0004
>> Kill Command = /usr/local/hadoop/bin/hadoop job
>>   -Dmapred.job.tracker=master:9001 -kill job_200911091418_0004
>> 2009-11-09 03:06:02,651 map = 100%, reduce = 100%
>> Ended Job = job_200911091418_0004 with errors
>> FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.ExecDriver
>>
>> Any advice on how to troubleshoot this is very much appreciated.
>>
>> Regards
>> Massoud
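For anyone following along: the "hive.log.dir" setting Ryan mentions lives in hive-log4j.properties. A minimal sketch of the relevant entries is below — the exact defaults vary by Hive version, and the paths shown are placeholders, so check your own build/dist/conf/hive-log4j.properties:

```properties
# Directory and file name for the Hive log; change hive.log.dir to relocate the logs
hive.log.dir=/tmp/${user.name}
hive.log.file=hive.log

# The rolling file appender picks up its output path from hive.log.dir
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
```

After changing hive.log.dir, restart the Hive CLI and look in the new directory for hive.log — the stack trace behind a "return code 2 from ExecDriver" failure usually shows up there or in the failed task's logs on the JobTracker page.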
