Can you send along your script and the reduce task logs? What version of Pig and Hadoop are you using?
Thanks,
-Dmitriy

On Sun, Dec 12, 2010 at 10:36 PM, <[email protected]> wrote:
> Hi,
>
> I loaded a csv file with about 10 fields into PigStorage and tried to do a
> GROUP BY on one of the fields. The MapReduce job gets created, and the
> Mappers finish execution.
>
> But after that, the job fails with the following error messages:
>
> 2010-12-13 10:31:08,902 [main] INFO
>  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>  - 100% complete
> 2010-12-13 10:31:08,902 [main] ERROR
>  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>  - 1 map reduce job(s) failed!
> 2010-12-13 10:31:08,911 [main] ERROR
>  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>  - Failed to produce result in:
>  "hdfs://hadoop.namenode:54310/tmp/temp2041073534/tmp-2060206542"
> 2010-12-13 10:31:08,911 [main] INFO
>  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>  - Failed!
> 2010-12-13 10:31:08,961 [main] ERROR org.apache.pig.tools.grunt.Grunt -
>  ERROR 6015: During execution, encountered a Hadoop error.
> 2010-12-13 10:31:08,961 [main] ERROR org.apache.pig.tools.grunt.Grunt -
>  org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to
>  open iterator for alias grouped_records
>         at org.apache.pig.PigServer.openIterator(PigServer.java:521)
>         at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:544)
>         at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:241)
>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:162)
>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:138)
>         at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:75)
>         at org.apache.pig.Main.main(Main.java:357)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR
>  6015: During execution, encountered a Hadoop error.
>         at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> Caused by: java.lang.NullPointerException
>         ... 2 more
>
> The filter statements (mapper-only jobs) work properly, so it's not that
> nothing is running.
>
> What's the issue here?
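For reference, a minimal sketch of the kind of script described above (the original script was not posted, so the file name, field names, and schema here are hypothetical):

```pig
-- Hypothetical reconstruction; names and schema are assumptions.
records = LOAD 'input.csv' USING PigStorage(',')
          AS (id:int, name:chararray, value:double);

-- GROUP BY on one field; alias matches the one in the error message.
grouped_records = GROUP records BY name;

-- DUMP is what triggers PigServer.openIterator, seen at the top
-- of the stack trace (ERROR 1066).
DUMP grouped_records;
```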
