Adam,

You can use the RunningJob.killTask(TaskAttemptID taskId, boolean shouldFail) API to kill the task. Clients can get hold of a RunningJob via the JobClient and then use it to kill the task.
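Roughly like this (a sketch, not tested; the job and attempt ID strings below are placeholders, and it assumes the cluster configuration files are on the client's classpath):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskAttemptID;

public class KillTaskExample {
  public static void main(String[] args) throws Exception {
    // Placeholder IDs -- substitute the real job and attempt IDs.
    JobID jobId = JobID.forName("job_201108030000_0001");
    TaskAttemptID attemptId =
        TaskAttemptID.forName("attempt_201108030000_0001_m_000003_0");

    // Assumes mapred-site.xml etc. are on the classpath so the
    // client can reach the JobTracker.
    JobClient client = new JobClient(new JobConf());
    RunningJob job = client.getJob(jobId);

    // shouldFail = true marks the attempt FAILED (counts toward the
    // task's failure limit); false marks it KILLED (does not).
    job.killTask(attemptId, false);
  }
}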
Refer to the API doc:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/RunningJob.html#killTask(org.apache.hadoop.mapred.TaskAttemptID, boolean)

Devaraj K

-----Original Message-----
From: Aleksandr Elbakyan [mailto:[email protected]]
Sent: Thursday, August 04, 2011 5:10 AM
To: [email protected]
Subject: Re: Kill Task Programmatically

Hello,

You can just throw a runtime exception. In that case the task will fail :)

Regards,
Aleksandr

--- On Wed, 8/3/11, Adam Shook <[email protected]> wrote:

From: Adam Shook <[email protected]>
Subject: Kill Task Programmatically
To: "[email protected]" <[email protected]>
Date: Wednesday, August 3, 2011, 3:33 PM

Is there any way I can programmatically kill or fail a task, preferably from inside a Mapper or Reducer? At any point during a map or reduce task, I have a use case where I know the task won't succeed based solely on the machine it is running on. It is rare, but I would prefer to kill the task and have Hadoop start it up on a different machine as usual instead of waiting for the 10-minute default timeout. I suppose speculative execution could take care of it, but I would rather not rely on it if I am able to kill the task myself.

Thanks,
Adam
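For completeness, Aleksandr's throw-from-inside-the-task approach looks roughly like this (a sketch, not tested on a cluster; hostCanSucceed() is a hypothetical stand-in for the machine-specific check Adam describes):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class FailFastMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    if (!hostCanSucceed()) {
      // Throwing fails this attempt; the framework reschedules it
      // (up to mapred.map.max.attempts, 4 by default), most likely
      // on a different node.
      throw new RuntimeException("This host cannot complete the task");
    }
    output.collect(new Text("line"), value);
  }

  // Hypothetical placeholder for the machine-specific check.
  private boolean hostCanSucceed() {
    return true;
  }
}

Note the tradeoff between the two approaches: a thrown exception counts against the task's failure limit and can eventually fail the job, while killTask(attemptId, false) kills the attempt without affecting job failure status.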
