It works when the queue name is specified as a command-line argument, as follows:

pig -Dmapred.job.queue.name=myqueue test.pig
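Spelled out, the two ways of setting the queue that come up in this thread are sketched below; "myqueue" is a placeholder for whatever queue name your cluster assigns you, and the pig.properties edit is demonstrated against a temporary file rather than a real install:

```shell
# Two ways to route a Pig job into a specific scheduler queue.
# "myqueue" is a placeholder -- use the queue name your cluster admin gives you.

# (a) Per run, as a -D property on the pig command line:
#       pig -Dmapred.job.queue.name=myqueue test.pig

# (b) For every run, by adding the property to pig.properties
#     (shown here against a temporary file standing in for that config):
props=$(mktemp)
echo "mapred.job.queue.name=myqueue" >> "$props"
grep "^mapred.job.queue.name" "$props"   # prints: mapred.job.queue.name=myqueue
```

Option (b) saves retyping the -D flag on every invocation once you know which queue you are supposed to use.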
Thanks Dmitriy!

On Wed, Dec 2, 2009 at 6:54 AM, Dmitriy Ryaboy <[email protected]> wrote:

> [I sent this off-list, and just got word that it worked. Resending it
> here so that it's archived and can help people who have a similar
> problem in the future.]
>
> It looks like your problem is the scheduler. I think M45 uses the
> "capacity scheduler", which defines different queues that jobs can run
> in; by default, the "default" queue is used. If the M45 cluster is
> configured so that there is no "default" queue, then not specifying a
> queue naturally leads to being scheduled into a queue that doesn't
> exist, which blows up. The capacity scheduler documentation is here:
> http://hadoop.apache.org/common/docs/r0.20.0/capacity_scheduler.html
>
> I don't know how the M45 queues are created -- perhaps they create an
> individual queue per user name or something. I think you get an intro
> email when you get access; it should be in there somewhere. The
> important part is that all of your jobs need to have the proper queue
> set, through the property mapred.job.queue.name=myqueue (I think you
> can set this in the pig.properties file).
>
> -D
>
> On Mon, Nov 23, 2009 at 9:45 PM, Haiyi Zhu <[email protected]> wrote:
> >
> > Ok, I see. Thx!
> >
> > On Mon, Nov 23, 2009 at 9:40 PM, Olga Natkovich <[email protected]> wrote:
> > >
> > > You need to find out what the version is; if it is Hadoop 18, you
> > > can use the Pig 0.4.0 release.
> > >
> > > Olga
> > >
> > > -----Original Message-----
> > > From: Haiyi Zhu [mailto:[email protected]]
> > > Sent: Monday, November 23, 2009 6:38 PM
> > > To: [email protected]
> > > Subject: Re: Error under Mapreduce Mode
> > >
> > > I am using the M45 clusters. I am not sure what version they run...
> > >
> > > On Mon, Nov 23, 2009 at 10:59 AM, Olga Natkovich <[email protected]> wrote:
> > > >
> > > > What version of Hadoop and what version of Pig are you using?
> > > > Based on the error, I assume you are using Pig 0.5.0, which
> > > > requires a Hadoop 20 cluster.
> > > >
> > > > Olga
> > > >
> > > > -----Original Message-----
> > > > From: Haiyi Zhu [mailto:[email protected]]
> > > > Sent: Monday, November 23, 2009 7:01 AM
> > > > To: [email protected]
> > > > Subject: Error under Mapreduce Mode
> > > >
> > > > Hi all,
> > > >
> > > > New to Pig. The simplest "load" and then "dump" does not work
> > > > under Mapreduce mode :-(
> > > > Here is the error information I get. I am wondering what "Queue
> > > > 'default' does not exist" means.
> > > >
> > > > 2009-11-22 21:53:25,317 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
> > > > 2009-11-22 21:53:25,317 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
> > > > 2009-11-22 21:53:27,282 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
> > > > 2009-11-22 21:53:27,384 [Thread-66] WARN  org.apache.hadoop.mapred.JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> > > > 2009-11-22 21:53:27,855 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> > > > 2009-11-22 21:54:01,499 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> > > > 2009-11-22 21:54:01,500 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map reduce job(s) failed!
> > > > 2009-11-22 21:54:01,607 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed to produce result in: "hdfs://grit-nn1.yahooresearchcluster.com/tmp/temp866681642/tmp383950780"
> > > > 2009-11-22 21:54:01,607 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> > > > 2009-11-22 21:54:01,608 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backend error: org.apache.hadoop.ipc.RemoteException: java.io.IOException: Queue "default" does not exist
> > > >
> > > > Thanks!
> > > > Haiyi
> > > >
> > > > --
> > > > Haiyi ZHU
> > > > Human Computer Interaction Institute
> > > > Carnegie Mellon University, Pittsburgh
> > > > E-mail: [email protected], [email protected]

--
Haiyi ZHU
Human Computer Interaction Institute
Carnegie Mellon University, Pittsburgh
E-mail: [email protected], [email protected]
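The Queue "default" does not exist error arises on the cluster side: jobs submitted without mapred.job.queue.name go to a queue named "default", and the capacity scheduler rejects them if no such queue is declared. As a reference point (this is a minimal sketch using the property names from the 0.20 capacity scheduler docs linked above, not M45's actual configuration; "myqueue" is a placeholder):

```xml
<!-- mapred-site.xml: the queues the JobTracker accepts jobs for.
     "myqueue" is a placeholder; a real cluster lists its own queues.
     If "default" is absent from this list, unqueued jobs fail as in
     the log above. -->
<property>
  <name>mapred.queue.names</name>
  <value>myqueue</value>
</property>

<!-- capacity-scheduler.xml: the share of cluster slots granted to
     that queue. -->
<property>
  <name>mapred.capacity-scheduler.queue.myqueue.capacity</name>
  <value>100</value>
</property>
```

Users normally cannot change this; the fix on the user side is simply to set mapred.job.queue.name to one of the declared queues, as done at the top of this thread.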
