[
https://issues.apache.org/jira/browse/MAPREDUCE-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685669#comment-16685669
]
Hudson commented on MAPREDUCE-7158:
-----------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15420 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/15420/])
MAPREDUCE-7158. Inefficient Flush Logic in JobHistory EventWriter. (wangda: rev
762a56cc64bc07d57f94e253920534b8e049f238)
* (edit)
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/EventWriter.java
> Inefficient Flush Logic in JobHistory EventWriter
> -------------------------------------------------
>
> Key: MAPREDUCE-7158
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7158
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 3.2.0
> Reporter: Zichen Sun
> Assignee: Zichen Sun
> Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: MAPREDUCE-7158-001.patch
>
>
> When flush is implemented as a server request that actually commits the
> pending writes on the storage service side (as in HDFS), our benchmark runs
> show MR jobs taking much longer. Investigation shows that the current
> implementation for writing events does not look right: EventWriter#write()
> flushes the encoder on every event. That flush is redundant and should be
> removed; it defeats the purpose of having a separate flush() method, since
> Encoder.flush() already flushes the underlying output stream. After applying
> the fix, the MR jobs complete normally. Please find the patch attached.