Apologies, I overlooked "One" associated with Part A of the question, and
answered it for the case when the cluster is out of disk space.

Part A

   1. The job would not fail if there are other nodes where the task can run
   (Hadoop reschedules the task on other nodes when a particular node goes down)
   2. Similarly, a new job would use other nodes
   3. That node will be blacklisted after a few failures (controlled by
   mapred.max.tracker.blacklists, default is 4)
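
As a sketch, the blacklist threshold mentioned in point 3 is set in
mapred-site.xml (the value 4 below is the default; the exact file location
and a suitable value depend on your cluster):

```xml
<!-- mapred-site.xml: how many times a TaskTracker can be blacklisted
     by individual jobs before it is blacklisted cluster-wide -->
<property>
  <name>mapred.max.tracker.blacklists</name>
  <value>4</value>
</property>
```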


On Sun, Feb 12, 2012 at 10:25 PM, Dmitriy Ryaboy <[email protected]> wrote:

> If just one (of many) nodes is full, job won't fail, though individual
> tasks might, and will get re-run elsewhere.
> Obviously that introduces unhappiness into your cluster, so avoid that.
> It's *really* bad for HBase.
>
> D
>
> On Sun, Feb 12, 2012 at 9:36 PM, jagaran das <[email protected]
> >wrote:
>
> >
> >
> >
> > ----- Forwarded Message -----
> > From: jagaran das <[email protected]>
> > To: "[email protected]" <[email protected]>
> > Sent: Sunday, 12 February 2012 9:33 PM
> > Subject: Hadoop Cluster Question
> >
> >
> > Hi,
> > A. What happens if one of the slave nodes' local disk space is full in a
> > cluster?
> >
> > 1. Would an already-running Pig job fail?
> > 2. Would any newly started Pig job fail?
> > 3. How would the Hadoop cluster behave? Would that become a dead node?
> >
> > B. In our production cluster we are seeing that one of the slave nodes is
> > more utilized than the others. By utilization I mean the %DFS is always
> > higher on it. How can we balance it?
> >
> > Thanks,
> > Jagaran
>
