After changing the jar I REGISTER to phoenix-2.1.2.jar (dropping the client
jar), I still get the same error:

2014-02-12 17:45:52,262 [main] ERROR org.apache.pig.tools.grunt.GruntParser
- ERROR 2997: Unable to recreate exception from backed error: Error: Found
interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was
expected

2014-02-12 17:45:52,262 [main] ERROR org.apache.pig.tools.grunt.GruntParser
- org.apache.pig.backend.executionengine.ExecException: ERROR 2997: Unable
to recreate exception from backed error: Error: Found interface
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getErrorMessages(Launcher.java:217)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getStats(Launcher.java:149)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:400)
    at org.apache.pig.PigServer.launchPlan(PigServer.java:1266)
    at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1251)
    at org.apache.pig.PigServer.execute(PigServer.java:1241)
    at org.apache.pig.PigServer.executeBatch(PigServer.java:335)
    at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:137)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
    at org.apache.pig.Main.run(Main.java:604)
    at org.apache.pig.Main.main(Main.java:157)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
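
In case it helps with diagnosis, here is a tiny standalone checker I
sketched (my own quick hack, not Phoenix or Pig code; the class name is
made up) that reports whether the TaskAttemptContext visible on a given
classpath is the Hadoop 1 class or the Hadoop 2 interface:

    // HadoopApiCheck.java - diagnostic sketch, not part of Phoenix or Pig.
    // Prints whether org.apache.hadoop.mapreduce.TaskAttemptContext on the
    // current classpath is the Hadoop 1.x concrete class or the Hadoop 2.x
    // interface - the exact mismatch behind "Found interface ... but class
    // was expected".
    public class HadoopApiCheck {
        public static void main(String[] args) throws ClassNotFoundException {
            Class<?> ctx =
                Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext");
            System.out.println(ctx.getName() + " is "
                + (ctx.isInterface() ? "an interface (Hadoop 2.x API)"
                                     : "a class (Hadoop 1.x API)"));
        }
    }

Compiling it with javac and running it as
java -cp .:$(hadoop classpath) HadoopApiCheck should show which Hadoop
flavor Pig and Phoenix are actually seeing at runtime.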


On Wed, Feb 12, 2014 at 5:23 PM, James Taylor <[email protected]> wrote:

> Phoenix will work with either Hadoop1 or Hadoop2. The
> phoenix-<version>-client.jar bundles the Hadoop1 jars, so if you want to
> use Hadoop2, don't use that jar. Instead you can use the
> phoenix-<version>.jar and include any other required jars on the classpath.
> On the client side, Phoenix depends on antlr and opencsv (if you're doing
> bulk loading).
>
> Thanks,
> James
>
>
> On Wed, Feb 12, 2014 at 5:10 PM, Russell Jurney
> <[email protected]> wrote:
>
>> I am using CDH 4.4, with HBase hbase-0.94.6+132 and pig-0.11.0+33. My
>> Hadoop client lib is hadoop-2.0.0+1475.
>>
>> So it looks like my Pig is MR2, but Phoenix is expecting MR1?
>>
>> I'm not really sure how to go about resolving this issue. CDH is a bit of
>> a black box: I don't know whether their Pig is built against MR1 or MR2,
>> and I don't have the source to recompile it.
>>
>> It looks like my Pig is using MR2.
>>
>>
>> On Tue, Feb 11, 2014 at 11:12 PM, Prashant Kommireddi <
>> [email protected]> wrote:
>>
>>> Yup, that seems like a classpath issue. Also, make sure to compile Pig
>>> against the correct Hadoop version if you are using the fat jar.
>>>
>>>
>>> On Tue, Feb 11, 2014 at 9:05 PM, Skanda <[email protected]> wrote:
>>>
>>>> Hi Russell,
>>>>
>>>> Which version of HBase and Hadoop are you using? The reason for this
>>>> issue is that TaskAttemptContext is an interface in Hadoop 2.x but is a
>>>> class in Hadoop 1.x.
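>>>>
>>>> From memory, the two declarations look roughly like this (bodies
>>>> elided; double-check against your actual jars):
>>>>
>>>>     // Hadoop 1.x (hadoop-core): a concrete class
>>>>     public class TaskAttemptContext extends JobContext
>>>>             implements Progressable { ... }
>>>>
>>>>     // Hadoop 2.x (hadoop-mapreduce-client-core): an interface
>>>>     public interface TaskAttemptContext
>>>>             extends JobContext, Progressable { ... }
>>>>
>>>> Bytecode compiled against the class form will not link against the
>>>> interface form, which is why the mismatch only shows up at runtime as
>>>> an IncompatibleClassChangeError.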
>>>>
>>>> Regards,
>>>> Skanda
>>>>
>>>>
>>>> On Wed, Feb 12, 2014 at 10:06 AM, James Taylor 
>>>> <[email protected]>wrote:
>>>>
>>>>> This is beyond my knowledge of Pig, but Prashant may know as he
>>>>> contributed our Pig integration.
>>>>>
>>>>> Thanks,
>>>>> James
>>>>>
>>>>>
>>>>> On Tue, Feb 11, 2014 at 4:34 PM, Russell Jurney <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> I am trying to store data into this table:
>>>>>>
>>>>>> CREATE TABLE IF NOT EXISTS BEACONING_ACTIVITY (
>>>>>>     EVENT_TIME VARCHAR NOT NULL,
>>>>>>     C_IP VARCHAR NOT NULL,
>>>>>>     CS_HOST VARCHAR NOT NULL,
>>>>>>     SLD VARCHAR NOT NULL,
>>>>>>     CONFIDENCE DOUBLE NOT NULL,
>>>>>>     RISK DOUBLE NOT NULL,
>>>>>>     ANOMOLY DOUBLE NOT NULL,
>>>>>>     INTERVAL DOUBLE NOT NULL,
>>>>>>     CONSTRAINT PK PRIMARY KEY (EVENT_TIME, C_IP, CS_HOST)
>>>>>> );
>>>>>>
>>>>>>
>>>>>> Using this Pig:
>>>>>>
>>>>>> hosts_and_risks = FOREACH hosts_and_anomaly GENERATE
>>>>>>     hour, c_ip, cs_host, sld, confidence,
>>>>>>     (confidence * anomaly) AS risk:double, anomaly, interval;
>>>>>> -- hosts_and_risks = ORDER hosts_and_risks BY risk DESC;
>>>>>> -- STORE hosts_and_risks INTO '/tmp/beacons.txt';
>>>>>> STORE hosts_and_risks INTO 'hbase://BEACONING_ACTIVITY' USING
>>>>>>     com.salesforce.phoenix.pig.PhoenixHBaseStorage('hiveapp1',
>>>>>>     '-batchSize 5000');
>>>>>>
>>>>>> And the most helpful error message I get is this:
>>>>>>
>>>>>> 2014-02-11 16:24:13,831 FATAL org.apache.hadoop.mapred.Child: Error
>>>>>> running child : java.lang.IncompatibleClassChangeError: Found interface
>>>>>> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>>>>>>     at com.salesforce.phoenix.pig.hadoop.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:75)
>>>>>>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
>>>>>>     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:597)
>>>>>>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:444)
>>>>>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>>>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>>>     at org.apache.hadoop.mapred.Child.main(Child.java:262)
>>>>>>
>>>>>>
>>>>>> What am I to do?
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Russell Jurney twitter.com/rjurney [email protected]
>>>>>> datasyndrome.com
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> Russell Jurney twitter.com/rjurney [email protected] datasyndrome.com
>>
>
>


-- 
Russell Jurney twitter.com/rjurney [email protected] datasyndrome.com
