It might be better to keep a counter of the bad data and let the job terminate normally.
I would be hesitant to shoot down the mother-ship.
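
For example, a rough sketch with the new-API Mapper (the isIrregular() check, the "Quality"/"BAD_RECORDS" counter names, and the threshold are just placeholders for illustration):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TolerantMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

  // Assumed threshold: how much bad data one task will tolerate before giving up.
  private static final long MAX_BAD_RECORDS = 100;
  private long badRecords = 0;

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    if (isIrregular(value)) {
      // Count the problem instead of killing the whole job.
      context.getCounter("Quality", "BAD_RECORDS").increment(1);
      badRecords++;
      if (badRecords > MAX_BAD_RECORDS) {
        // Failing the task this way lets the framework retry it; if every
        // attempt fails, the job itself fails.
        throw new IOException("too many irregular records in this split");
      }
      return; // skip the bad record and keep going
    }
    context.write(value, new LongWritable(1L));
  }

  // Placeholder for whatever "irregular" means for your data.
  private boolean isIrregular(Text value) {
    return value.getLength() == 0;
  }
}

Each mapper then just skips and counts the irregular records, and only gives up when the data looks hopeless. The driver can also read the counter after waitForCompletion() and decide what to do with the output.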

Mehmet

On Apr 6, 2011, at 5:40 PM, Haruyasu Ueda wrote:

> Hi all,
> 
> I'm writing M/R java program.
> 
> I want to abort the job itself from within a map task, when the map task
> finds irregular data.
> 
> I have two ideas for doing so:
> 1. execute "bin/hadoop job -kill jobID" from the map task, on the slave machine.
> 2. raise an IOException to abort.
> 
> I want to know which is the better way,
> or whether there is a better/recommended programming idiom.
> 
> If you have any experience about this, please share your case.
> 
> --HAL
> ========================================================================
> Haruyasu Ueda, Senior Researcher
>  Research Center for Cloud Computing
>  FUJITSU LABORATORIES LTD.
> E-mail: [email protected]
> Tel: +81 44 754 2575
> Ken-S602, 4-1-1, Kamikodanaka, Nakahara-ku, Kawasaki, 211-8588 Japan
> ========================================================================
> 
> 
