[
https://issues.apache.org/jira/browse/MAPREDUCE-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13164483#comment-13164483
]
Robert Joseph Evans commented on MAPREDUCE-3512:
------------------------------------------------
Unless we are somehow stopping tasks from doing any work until the event is
written out to the history file, batching up the writes will reduce the number
of tasks that have to rerun on AM recovery. This is because the events are
already effectively batched up in the queue, and if we crash while they are
sitting in the queue we cannot recover them.
Perhaps what we want to do is a non-blocking check of the event queue so we
can batch all events currently on the queue, up to a given limit, into a
single write. That way, if there are not very many events we do more frequent
writes and the events are flushed quickly, but if we start to fall behind on
the writes then we batch them up into bigger chunks, which are more efficient.
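The drain loop described above could look something like the following sketch. This is not the actual JobHistoryEventHandler code; the class name, the String stand-in for the real history event type, and the MAX_BATCH cap are all illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the proposed adaptive batching: block for the
// first event, then poll non-blockingly to sweep up whatever else is
// already queued, capped at MAX_BATCH events per disk write.
public class BatchingEventWriter {
    private static final int MAX_BATCH = 100; // assumed cap, tune as needed
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void enqueue(String event) {
        queue.offer(event);
    }

    // When the queue is nearly empty this returns small batches quickly
    // (frequent writes, low latency); under backlog it returns large
    // batches (fewer, more efficient writes).
    List<String> nextBatch() throws InterruptedException {
        List<String> batch = new ArrayList<>();
        batch.add(queue.take());                 // block until one event exists
        String e;
        while (batch.size() < MAX_BATCH && (e = queue.poll()) != null) {
            batch.add(e);                        // non-blocking: drain the rest
        }
        return batch;                            // caller does one write per batch
    }
}
```

The key point is that poll() never waits, so batch size adapts automatically to how far behind the writer has fallen.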
> Batch jobHistory disk flushes
> -----------------------------
>
> Key: MAPREDUCE-3512
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3512
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Components: mr-am, mrv2
> Affects Versions: 0.23.0
> Reporter: Siddharth Seth
>
> The mr-am flushes each individual job history event to disk for AM recovery.
> The history event handler ends up with a significant backlog in tests like
> MAPREDUCE-3402.
> History events could be batched up based on num records / time /
> TaskFinishedEvents to reduce the number of DFS writes - with the potential
> drawback of having to rerun some tasks during AM recovery.
--
This message is automatically generated by JIRA.