The reason is that there is an API change between Hadoop 0.17 and Hadoop 0.19.

Please try: ant -Dtarget.dir=/hive -Dhadoop.version='0.17.0' package
You can still run Hive against Hadoop 0.18.3.

I think that should solve the problem. In the meantime, can you open a JIRA
at https://issues.apache.org/jira/browse/HIVE ?
We should get this fixed.
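For reference, the full rebuild-and-redeploy cycle would look roughly like this (a sketch using the paths from your description; the source checkout path is an assumption, adjust to your layout):

```shell
# Sketch of the rebuild/redeploy cycle, assuming /path/to/hive-src is
# your trunk checkout and /hive is your deployment directory.
cd /path/to/hive-src

# Build against the 0.17.0 API; the resulting jars still run on 0.18.3:
ant -Dtarget.dir=/hive -Dhadoop.version='0.17.0' package

# Redeploy by replacing the old install with the fresh build:
cp -r build/dist/* /hive/

# Sanity-check the query that was failing:
/hive/bin/hive -e 'select count(1) from english;'
```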

Zheng

On Mon, Feb 23, 2009 at 7:57 PM, hc busy <[email protected]> wrote:

>
> Also, we tried branch-0.2 and trunk against 0.18.2 and 0.18.3. We saw a few
> earlier notes relating this error to building against the proper Hadoop
> version, but that didn't seem to help here. What else could be causing this?
>
>
>
>
> On Mon, Feb 23, 2009 at 7:48 PM, hc busy <[email protected]> wrote:
>
>> *Setting*:
>> Hadoop 0.18.3, on several nodes that are able to run MR jobs.
>> Hive trunk built with "ant -Dtarget.dir=/hive -Dhadoop.version='0.18.3'
>> package", and then deployed by copying build/dist/* to /hive;
>> $HADOOP_HOME, $HADOOP, $HIVE_HOME are all configured correctly.
>>
>> I imported a list of English words into a table called english. It is a
>> table with a single string column, and 'select * from english;' works, BUT!!
>> the following fails. Can anybody help?
>>
>> .
>> .
>> .
>> courtesan
>> courtesanry
>> courtesans
>> courtesanship
>> courtesied
>> courtesies
>> courtesy
>> courtesy
>> Time taken: 4.584 seconds
>> *hive*> *select count(1) from english;*
>>
>> Total MapReduce jobs = 2
>> Number of reduce tasks not specified. Defaulting to jobconf value of: 16
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> java.lang.AbstractMethodError:
>> org.apache.hadoop.hive.ql.io.HiveInputFormat.validateInput(Lorg/apache/hadoop/mapred/JobConf;)V
>>         at
>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:735)
>>         at
>> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:391)
>>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:238)
>>         at
>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:174)
>>         at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:207)
>>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:306)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>>         at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>         at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
>>
>>
>


-- 
Yours,
Zheng
