Should just work. Did it start a mapreduce job? Can you get task failure or job setup failure logs?
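For what it's worth, the EOFException frames in the trace below come straight from ObjectInputStream's constructor, which eagerly reads a serialization stream header before anything else; if the serialized split bytes handed to PigSplit.readObject are empty or truncated (one symptom of a Pig jar and cluster Hadoop disagreeing on the split wire format), it fails exactly there. A minimal standalone illustration of that failure mode (EofDemo is a made-up class, not Pig or Hadoop code):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.ObjectInputStream;

// Hypothetical demo class -- not part of Pig or Hadoop.
public class EofDemo {
    static String demo() throws IOException {
        // ObjectInputStream's constructor reads a stream header (magic
        // number + version) via readStreamHeader -> readShort ->
        // PeekInputStream.readFully -- the exact frames in the trace.
        // An empty (or truncated) payload therefore fails immediately.
        try {
            new ObjectInputStream(new ByteArrayInputStream(new byte[0]));
            return "no exception";
        } catch (EOFException e) {
            return "EOFException from readStreamHeader";
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

Since the same deserialization works in local mode, the bytes themselves are presumably fine; the mismatch only shows up when the cluster-side Hadoop does the reading, which is why the task logs are the place to look.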
On Thu, Dec 15, 2011 at 7:14 AM, Rohini U <[email protected]> wrote:
> Hi,
>
> I am trying to use Pig 0.9.1 with CDH3u1 packaged hadoop. I compiled pig
> without hadoop jars to avoid conflicts, and am using that jar to run pig
> jobs. Things are running fine in local mode but on the cluster, I get the
> following error:
>
> 2011-12-15 05:15:49,641 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> 2011-12-15 05:15:49,643 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Unable to recreate exception from backed error: java.io.EOFException
>         at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2281)
>         at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2750)
>         at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:780)
>         at java.io.ObjectInputStream.<init>(ObjectInputStream.java:280)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.readObject(PigSplit.java:264)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.readFields(PigSplit.java:209)
>         at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>         at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>         at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:349)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:611)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>         at org.apache.hadoop.mapred.Child.main(Child.java:264)
>
> Any insights what might be wrong?
> Is there any way we can make pig 0.9 work with the CDH3u1 version of hadoop?
>
> Thanks
> -Rohini
