On 05/03/2010 07:55 PM, Ersoy Bayramoglu wrote:
Hi,

I'm a new user, and I have a question about aborting an in-progress MapReduce job. If
one of the mappers computes a particular value, I'd like to stop the entire job
and give control back to the master. Is this possible in Hadoop?

Can't you just throw an exception from the mapper? I believe that would fail the task, and once the task has exhausted its retry attempts, the job as a whole fails. It wouldn't stop other tasks already in progress, but presumably it would prevent new tasks from being launched.
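A minimal sketch of that route, with Hadoop stripped out so it stands alone (the method name, the record list, and the sentinel value are invented for illustration; in a real job you would throw from inside `Mapper.map()` and let the failed task take the job down):

```java
import java.io.IOException;
import java.util.List;

public class AbortOnValue {
    // Stand-in for a mapper body: throws as soon as the target value appears.
    // In Hadoop this would fail the task attempt, and repeated failures fail the job.
    static int countUntilTarget(List<String> records, String target) throws IOException {
        int processed = 0;
        for (String record : records) {
            if (record.equals(target)) {
                throw new IOException("target found after " + processed + " records");
            }
            processed++; // real per-record work would go here
        }
        return processed;
    }

    public static void main(String[] args) {
        try {
            countUntilTarget(List.of("a", "b", "STOP", "c"), "STOP");
            System.out.println("no abort");
        } catch (IOException e) {
            System.out.println("aborted: " + e.getMessage());
        }
    }
}
```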

If that's not sufficient, and you absolutely need to stop all currently-in-progress map tasks as well, then I think the only solution would be to use ZooKeeper: have your map/reduce tasks periodically check for the presence of some abort node in ZK, and bail out when they see it.
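That pattern boils down to a shared abort flag that every task polls between records. A self-contained sketch, with an `AtomicBoolean` standing in for the ZK abort node (the class name, check interval, and record counts are all made up; a real task would call something like ZooKeeper's `exists()` on the abort path instead):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class AbortFlagDemo {
    // Stand-in for checking whether the abort node exists in ZooKeeper.
    static final AtomicBoolean ABORT = new AtomicBoolean(false);

    // Poll every N records rather than every record, to keep the check cheap.
    static final int CHECK_INTERVAL = 100;

    // Returns how many records were processed before the abort flag was seen.
    static int runTask(int totalRecords) {
        int processed = 0;
        for (int i = 0; i < totalRecords; i++) {
            if (i % CHECK_INTERVAL == 0 && ABORT.get()) {
                break; // in a real task: clean up and return from map()
            }
            processed++; // real per-record work would go here
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(runTask(250));  // flag unset: all 250 records processed
        ABORT.set(true);                   // some mapper found the value, set the flag
        System.out.println(runTask(250));  // flag checked before record 0: stops at once
    }
}
```

The check interval is the usual trade-off: checking on every record adds overhead (especially if the check is a network round trip to ZK), while a large interval delays how quickly tasks notice the abort.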

HTH,

DR
