Hi,

Please check the values of mapreduce.map.maxattempts and
mapreduce.reduce.maxattempts. If you'd like to work around the error
only in specific jobs, you can use the -D option to override the
configuration on the command line, as follows:

bin/hadoop jar job.jar -Dmapreduce.map.maxattempts=10
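
Note that -D generic options are only applied automatically when the
driver parses them, e.g. by implementing Tool and launching through
ToolRunner. A minimal sketch of such a driver (the class name MyJob is
just a placeholder, and the mapper/reducer setup is elided):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  public class MyJob extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
      // getConf() already contains any -D overrides that ToolRunner
      // parsed from the command line
      Job job = Job.getInstance(getConf(), "my-job");
      job.setJarByClass(MyJob.class);
      // ... set mapper, reducer, input/output paths here ...
      return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
      // ToolRunner strips -Dkey=value pairs out of args and applies
      // them to the Configuration before calling run()
      System.exit(ToolRunner.run(new Configuration(), new MyJob(), args));
    }
  }

With a driver like this, the -D flag above overrides
mapreduce.map.maxattempts for that single run only, without editing
mapred-site.xml.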

Thanks,
- Tsuyoshi

On Tue, Aug 19, 2014 at 2:57 AM, Susheel Kumar Gadalay
<skgada...@gmail.com> wrote:
> Check the parameter yarn.app.mapreduce.client.max-retries.
>
> On 8/18/14, parnab kumar <parnab.2...@gmail.com> wrote:
>> Hi All,
>>
>>        I am running a job with between 1300 and 1400 map tasks. Some
>> map tasks fail due to an error, and when 4 such maps fail the job
>> gets killed. How can I ignore the failed tasks and carry on executing
>> the other map tasks? I am okay with losing some data from the failed
>> tasks.
>>
>> Thanks,
>> Parnab
>>


