On 04/06/2011 08:40 PM, Haruyasu Ueda wrote:
Hi all,
I'm writing M/R java program.
I want to abort the job itself from within a map task when the map task finds
irregular data.
I have two ideas for doing so:
1. execute "bin/hadoop job -kill <jobID>" from the map task on the slave machine.
2. throw an IOException to abort.
I'd like to know which is the better way, or whether there is a
better/recommended programming idiom.
If you have any experience with this, please share your case.
--HAL
I'd go with throwing the exception. That way the cause of the job failure
will be displayed right in the Hadoop web UI.
DR
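
For illustration, a minimal sketch of the exception approach using the new-API (org.apache.hadoop.mapreduce) Mapper. The class name, the tab-separated record format, and the validity check are all hypothetical; the point is only that an exception thrown from map() fails the task attempt, and once the attempt limit (mapreduce.map.maxattempts) is exhausted the whole job fails, with the exception message visible in the job's web UI and task logs:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: aborts the job on irregular input by throwing.
public class ValidatingMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {

    private static final LongWritable ONE = new LongWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] fields = line.split("\t");

        // Example validity check: expect at least two tab-separated fields.
        if (fields.length < 2) {
            // The exception fails this task attempt; after the configured
            // number of retries, the framework fails the entire job and
            // shows this message as the failure cause.
            throw new IOException(
                "Irregular record at byte offset " + key + ": " + line);
        }

        context.write(new Text(fields[0]), ONE);
    }
}
```

Note that a thrown exception only fails one task attempt; the framework retries it (by default a few times) before failing the job, so the abort is not instantaneous. If you need the job to stop on the first bad record, setting mapreduce.map.maxattempts to 1 for that job avoids the retries.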