Hello,

You can just throw a runtime exception. In that case the task attempt will fail and the framework will reschedule it, usually on another node :)
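Roughly like this -- a minimal sketch against the org.apache.hadoop.mapreduce API; isBadHost() is a hypothetical stand-in for whatever check tells you the machine is unusable:

    import java.io.IOException;
    import java.net.InetAddress;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class FailFastMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            String host = InetAddress.getLocalHost().getHostName();
            if (isBadHost(host)) {
                // An unchecked exception fails this attempt immediately;
                // the framework retries it (typically on a different node)
                // instead of waiting for the task timeout.
                throw new RuntimeException(
                    "Host " + host + " cannot run this task, failing fast");
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // ... normal map logic ...
        }

        // Hypothetical placeholder for your machine-specific check.
        private boolean isBadHost(String host) {
            return false;
        }
    }

One caveat: this counts as a *failed* attempt rather than a *killed* one, so if the same task fails too many times (the max-attempts setting, 4 by default) the whole job fails.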

Regards,
Aleksandr 

--- On Wed, 8/3/11, Adam Shook <[email protected]> wrote:

From: Adam Shook <[email protected]>
Subject: Kill Task Programmatically
To: "[email protected]" <[email protected]>
Date: Wednesday, August 3, 2011, 3:33 PM

Is there any way I can programmatically kill or fail a task, preferably from 
inside a Mapper or Reducer?

I have a use case where, at some point during a map or reduce task, I know it won't succeed based solely on the machine it is running on.  It is rare, but I would prefer to kill the task and have Hadoop start it up on a different machine as usual, instead of waiting for the 10-minute default timeout.

I suppose speculative execution could take care of it, but I would rather not rely on it if I am able to kill the task myself.

Thanks,
Adam
