Is there any way I can programmatically kill or fail a task, preferably from inside a Mapper or Reducer?
I have a use case where, at some point during a map or reduce task, I know the task won't succeed based solely on the machine it is running on. It is rare, but when it happens I would rather kill the task and let Hadoop start it up on a different machine as usual than wait out the 10-minute default timeout. I suppose speculative execution could take care of it, but I would prefer not to rely on that if I can kill the task myself.

Thanks, Adam
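
To make the intent concrete, here is a rough sketch of the kind of thing I mean (the class name and the machineIsKnownBad() check are just placeholders for my real logic). Throwing from the task makes that attempt fail so the framework retries it elsewhere, but I'm not sure that's the right mechanism, since failed attempts count toward the job's max-attempts limit:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class FailFastMapper extends Mapper<LongWritable, Text, Text, Text> {

        @Override
        protected void setup(Context context) throws IOException {
            // Hypothetical check: this node cannot complete the task.
            if (machineIsKnownBad()) {
                // Throwing fails this attempt immediately; the framework then
                // reschedules the task (up to the configured max attempts),
                // usually on a different node.
                throw new IOException("Bailing out early: this node cannot run the task");
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // ... normal map logic ...
        }

        // Placeholder for whatever machine-specific check applies.
        private boolean machineIsKnownBad() {
            return false;
        }
    }

Ideally I'd like something with "kill" semantics instead, so the bailed-out attempt doesn't count against the retry limit.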
