Being paged out is sad - but even the worst case is no worse than killing the
job (where all the data has to be *recomputed* back into memory on restart -
not just swapped in from disk).

The best and average cases are likely far better.

(Disk capacity seems to be no issue at all - but perhaps we are just blessed to
be in that state.)
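
For what it's worth, a rough sketch of what a plain "unix suspend" would amount
to on the tracker side - just SIGSTOP/SIGCONT on the task process. This is only
an illustration, not anything that exists in Hadoop; the class name and the pid
plumbing are made up here:

    import java.io.IOException;

    /**
     * Illustration only: suspend/resume a task process with plain Unix signals.
     * Not Hadoop code - the class name and pid plumbing are assumptions.
     */
    public class TaskSuspender {

        /** SIGSTOP the task so it stops using CPU (RAM and disk are still held). */
        public static void suspend(int pid) throws IOException, InterruptedException {
            signal("STOP", pid);
        }

        /** SIGCONT the task so it picks up exactly where it left off. */
        public static void resume(int pid) throws IOException, InterruptedException {
            signal("CONT", pid);
        }

        private static void signal(String sig, int pid)
                throws IOException, InterruptedException {
            // Shell out to kill(1), e.g. "kill -STOP 12345".
            Process p = new ProcessBuilder("kill", "-" + sig, Integer.toString(pid))
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IOException("kill -" + sig + " " + pid + " failed");
            }
        }
    }

With the obvious caveat Doug raises below: a stopped task still pins its RAM
and its intermediate output on disk until the kernel decides to page it out.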

________________________________

From: Doug Cutting [mailto:[EMAIL PROTECTED]]
Sent: Thu 1/10/2008 2:24 PM
To: hadoop-user@lucene.apache.org
Subject: Re: Question on running simultaneous jobs



Joydeep Sen Sarma wrote:
> can we suspend jobs (just unix suspend) instead of killing them?

We could, but they'd still consume RAM and disk.  The RAM might
eventually get paged out, but relying on that is probably a bad idea.
So, this could work for tasks that don't use much memory and whose
intermediate data is small, but that's frequently not the case.

Doug

