I see that trunk already provides hadoop1/hadoop2 build profiles, so I guess
this is no longer a bug in trunk.
Hence I built Avro from trunk with:

    mvn clean install -DskipTests=true eclipse:clean eclipse:eclipse -P hadoop2
I navigated to org.apache.avro.mapreduce.AvroRecordReaderBase in Eclipse and
clicked through the import of org.apache.hadoop.mapreduce.TaskAttemptContext.
It still resolves to the hadoop-0.20.205 library instead of the Hadoop 2.x
client library.
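A quick way to tell which line of Hadoop jars actually won on a given classpath is to check whether TaskAttemptContext resolves to an interface (Hadoop 2.x) or a class (Hadoop 1.x/0.20.x), since that is exactly what the IncompatibleClassChangeError complains about. Here is a small stand-alone diagnostic sketch; the class and method names are my own, not part of Avro or Hadoop, and it defaults to java.lang.Runnable only so it runs without Hadoop on the classpath:

```java
// Hypothetical diagnostic, not part of Avro: report whether a fully-qualified
// type name is an interface or a class on the current classpath. Run it with
// "org.apache.hadoop.mapreduce.TaskAttemptContext" on your job's classpath:
// "interface" means Hadoop 2.x jars won, "class" means Hadoop 1.x/0.20.x.
public class ClassKindCheck {

    // Returns "interface" or "class" for the named type.
    static String kindOf(String fqcn) throws ClassNotFoundException {
        return Class.forName(fqcn).isInterface() ? "interface" : "class";
    }

    public static void main(String[] args) throws Exception {
        // Stand-in default so this sketch runs without Hadoop jars.
        String name = args.length > 0 ? args[0] : "java.lang.Runnable";
        System.out.println(name + " -> " + kindOf(name));

        // Also show where the class was loaded from (null for bootstrap classes).
        java.security.CodeSource src =
                Class.forName(name).getProtectionDomain().getCodeSource();
        System.out.println("loaded from: "
                + (src == null ? "bootstrap classpath" : src.getLocation()));
    }
}
```

The "loaded from" line is the useful part in practice: it prints the jar the JVM actually loaded the class from, which pins down which Hadoop artifact is leaking in.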

Am I doing something wrong?
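For reference, the usual shape of such a Maven profile is something like the sketch below; the artifactId and version here are illustrative assumptions, not copied from Avro's actual pom:

```xml
<!-- Illustrative only: a hadoop2 profile that swaps in the Hadoop 2.x client.
     The coordinates below are assumptions, not Avro's real pom. -->
<profile>
  <id>hadoop2</id>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.2.0</version>
    </dependency>
  </dependencies>
</profile>
```

To see which Hadoop artifacts actually resolve under the profile, `mvn dependency:tree -P hadoop2 -Dincludes=org.apache.hadoop` is handy: if a 0.20.x jar still shows up, either the profile is not taking effect or another dependency is dragging it in transitively.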


On Mon, May 12, 2014 at 8:51 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <[email protected]> wrote:

> Thanks.
> https://issues.apache.org/jira/browse/AVRO-1506
>
> I can take it up.
>
>
> On Mon, May 12, 2014 at 7:22 AM, Lewis John Mcgibbney <
> [email protected]> wrote:
>
>> My guess is that this is on the Avro side. We've seen similar traces with Nutch.
>> This looks like a JIRA ticket.
>> On May 11, 2014 4:53 PM, "Deepak" <[email protected]> wrote:
>>
>>>
>>>
>>> On 07-May-2014, at 7:35 am, ÐΞ€ρ@Ҝ (๏̯͡๏) <[email protected]> wrote:
>>>
>>> Exception:
>>>
>>> java.lang.Exception: java.lang.IncompatibleClassChangeError: Found
>>> interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was
>>> expected
>>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
>>> Caused by: java.lang.IncompatibleClassChangeError: Found interface
>>> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>>>     at org.apache.avro.mapreduce.AvroRecordReaderBase.initialize(AvroRecordReaderBase.java:86)
>>>     at com.tracking.sdk.pig.load.format.AggregateRecordReader.initialize(AggregateRecordReader.java:41)
>>>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initialize(PigRecordReader.java:192)
>>>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:525)
>>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
>>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>>     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>     at java.lang.Thread.run(Thread.java:744)
>>>
>>>
>>> Imports used in my record reader class:
>>>
>>> import org.apache.avro.Schema;
>>> import org.apache.avro.mapreduce.AvroKeyValueRecordReader;
>>> import org.apache.hadoop.mapreduce.InputSplit;
>>> import org.apache.hadoop.mapreduce.TaskAttemptContext;
>>>
>>> Any suggestions? Or does this require a fix from Avro?
>>>
>>> Regards,
>>>
>>> Deepak
>>>
>>>
>
>
> --
> Deepak
>
>


-- 
Deepak
