Agreed. But how did I manage to get 54 killed tasks vs 62 killed
task-attempts? I understand what a "failed task" is (a task for which
'mapred.map.max.attempts' attempts have failed). But what's a "killed task"?
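(A small sketch of how the two counters can diverge, assuming the standard Hadoop ID naming where a TaskAttemptID is its parent TaskID plus a trailing attempt number. The IDs below are made up for illustration; the point is only that several killed attempts can map to the same task, so the attempt count can exceed the task count.)

```python
# Hedged sketch: sample IDs are hypothetical, not taken from the job above.
# A Hadoop TaskAttemptID looks like attempt_<jtStart>_<jobid>_m_<taskid>_<n>;
# dropping the trailing _<n> and swapping the prefix yields the TaskID.

def attempt_to_task(attempt_id: str) -> str:
    """Map a TaskAttemptID string to its parent TaskID string."""
    base = attempt_id.rsplit("_", 1)[0]          # drop the attempt number
    return base.replace("attempt_", "task_", 1)  # attempt_... -> task_...

# Hypothetical killed attempts: task 000002 had two attempts killed.
killed_attempts = [
    "attempt_200908031600_0001_m_000001_0",
    "attempt_200908031600_0001_m_000002_0",
    "attempt_200908031600_0001_m_000002_1",
]
killed_tasks = {attempt_to_task(a) for a in killed_attempts}
print(len(killed_attempts), len(killed_tasks))  # 3 attempts, 2 distinct tasks
```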

On Mon, Aug 3, 2009 at 6:41 PM, Enis Soztutar <[email protected]> wrote:

> Hi,
>
> A task attempt is a single attempt to run a task. At any given time, one
> or more (with speculative execution) attempts of a task can be running.
> For a task, there can be many attempts on different nodes. A task is
> complete if any of its attempts completes. For a task to be marked as
> failed, all mapred.map.max.attempts attempts must fail. Every task in the
> job is assigned a TaskID, and every attempt is assigned a TaskAttemptID
> (which ends with _0, _1, etc.).
>
>
> Harish Mallipeddi wrote:
>
>> Hi,
>>
>> Can anyone tell me what's the difference between "Killed Task Attempts"
>> and "Killed Tasks"? I ran a big job (14820 maps and 0 reduces). In the
>> job-details page, the web GUI reports 62 "killed task attempts". I'm
>> assuming this is due to "speculative execution". Now when I go to the
>> job-history page for the job, it reports 54 "killed tasks" (and 14820
>> successful map-tasks, as expected).
>>
>> A few questions:
>>
>> * Why 62 killed task attempts vs 54 killed tasks?
>> * Under speculative execution, does hadoop launch a new MapTask with new
>> task-id or does it just launch a new MapTaskAttempt with a new
>> task-attempt-id?
>> * When a MapTaskAttempt fails, and when hadoop tries to re-launch the
>> MapTask, does it create a new task-id or just a new task-attempt-id?
>> * Does 'mapred.map.max.attempts' include all attempts launched due to
>> speculative-execution?
>>
>> Btw this job is basically a trivial no-op job - it just scans around
>> 1TB of data and does nothing else in the map. I looked at the killed
>> tasks' syslog output and I didn't see any errors.
>>
>>
>>
>
>


-- 
Harish Mallipeddi
http://blog.poundbang.in
