If just one (of many) nodes is full, the job won't fail, though individual
tasks scheduled on that node might, and they will get re-run elsewhere.
Obviously that introduces unhappiness into your cluster, so avoid it.
It's *really* bad for HBase.
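One way to head that off is to reserve per-volume head-room so HDFS stops
writing before the disk actually fills. A minimal sketch for hdfs-site.xml,
assuming stock HDFS settings (the 10 GB figure is only an illustration;
size it for your own logs and MapReduce intermediate output):

  <!-- Keep 10 GB per volume free for non-DFS use: logs, MapReduce
       spill files, the OS. Value is in bytes; 10737418240 = 10 * 1024^3. -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>
  </property>

Note this only limits what the DataNode itself writes; a runaway job can
still fill the disk through its task-local directories.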

D

On Sun, Feb 12, 2012 at 9:36 PM, jagaran das <[email protected]> wrote:

>
> ----- Forwarded Message -----
> From: jagaran das <[email protected]>
> To: "[email protected]" <[email protected]>
> Sent: Sunday, 12 February 2012 9:33 PM
> Subject: Hadoop Cluster Question
>
>
> Hi,
> A. What happens if one of the slave nodes' local disk space fills up in a
> cluster?
>
> 1. Would an already-running Pig job fail?
> 2. Would any newly started Pig job fail?
> 3. How would the Hadoop cluster behave? Would that be a dead node?
>
> B. In our production cluster we are seeing that one of the slave nodes is
> more utilized than the others.
> By utilization I mean its %DFS is always higher. How can we balance it?
>
> Thanks,
> Jagaran
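
For question B, the usual tool is the HDFS balancer. A minimal invocation,
assuming a 1.x-era cluster with the stock scripts on the PATH (tune the
threshold to taste):

  # Run from any node with the Hadoop scripts installed.
  # -threshold 10 means: move blocks until every DataNode's utilization
  # is within 10 percentage points of the cluster-wide average.
  start-balancer.sh -threshold 10

  # The balancer checkpoints as it goes; it is safe to stop at any time
  # and re-run later.
  stop-balancer.sh

Note the balancer only redistributes HDFS blocks; it does nothing about
skew in MapReduce task-local data.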
