Thank you. This explains why they all appear to be running - they are actually just queued.

Mark

On Tue, Dec 29, 2009 at 11:30 AM, abhishek sharma <[email protected]> wrote:

> Hi Mark,
>
> When you submit multiple jobs to the same cluster, these jobs are
> queued up at the jobtracker, and executed in FIFO order.
>
> Based on my understanding of the Hadoop FIFO scheduler, the order in
> which jobs get executed is determined by two things: (1) the priority
> of the job (all jobs have NORMAL priority by default), and (2) the
> start time of the job. So when all jobs have the same priority, they
> are executed in the order in which they arrive at the cluster.
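>
> For example, you can raise a job's priority before submitting it so
> that the FIFO scheduler picks it ahead of NORMAL-priority jobs already
> in the queue. The sketch below uses the old org.apache.hadoop.mapred
> API from memory, and MyMapper/MyReducer are placeholders for your own
> classes, so please check the calls against your Hadoop version:
>
>   import org.apache.hadoop.fs.Path;
>   import org.apache.hadoop.mapred.*;
>
>   public class HighPriorityJob {
>     public static void main(String[] args) throws Exception {
>       JobConf conf = new JobConf(HighPriorityJob.class);
>       conf.setJobName("high-priority-job");
>       conf.setMapperClass(MyMapper.class);    // placeholder mapper
>       conf.setReducerClass(MyReducer.class);  // placeholder reducer
>       FileInputFormat.setInputPaths(conf, new Path(args[0]));
>       FileOutputFormat.setOutputPath(conf, new Path(args[1]));
>       // Jobs default to NORMAL; HIGH tells the FIFO scheduler to run
>       // this job before NORMAL jobs that are waiting in the queue.
>       conf.setJobPriority(JobPriority.HIGH);
>       JobClient.runJob(conf);  // submits and blocks until the job ends
>     }
>   }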
>
> For each job you submit, there is some initial processing before it
> starts executing; at the end of that, a message "Running job"+JOBID is
> printed. At this point, the job has been queued up at the jobtracker
> and is awaiting execution.
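>
> You can see the queueing directly if you submit without blocking on
> the job. Again only a sketch (old mapred API, method names from
> memory), with the mapper/reducer/path setup elided:
>
>   import org.apache.hadoop.mapred.JobClient;
>   import org.apache.hadoop.mapred.JobConf;
>   import org.apache.hadoop.mapred.RunningJob;
>
>   public class SubmitWithoutBlocking {
>     public static void main(String[] args) throws Exception {
>       JobConf conf = new JobConf(SubmitWithoutBlocking.class);
>       // ... set mapper, reducer, input and output paths as above ...
>
>       // submitJob() hands the job to the jobtracker and returns right
>       // away; the job then waits in the queue until task slots free up.
>       JobClient client = new JobClient(conf);
>       RunningJob running = client.submitJob(conf);
>       System.out.println("Submitted " + running.getID()
>           + ", complete = " + running.isComplete());
>
>       // JobClient.runJob(conf), by contrast, submits and then polls
>       // the jobtracker, printing "Running job"+JOBID and progress
>       // until the job completes.
>     }
>   }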
>
> Hadoop also comes with other schedulers, for example the Fair Scheduler
> (http://hadoop.apache.org/common/docs/current/fair_scheduler.html).
>
> Hope this helps,
> Abhishek
>
> On Tue, Dec 29, 2009 at 12:16 PM, Mark Kerzner <[email protected]>
> wrote:
> > Hi,
> >
> > what happens when I submit a few jobs on the cluster? To me, it seems
> > like they all are running - which I know can't be, because I only
> > have 2 slaves. Where do I read about this?
> >
> > I am using Cloudera with EC2.
> >
> > Thank you,
> > Mark
> >
>
