[
https://issues.apache.org/jira/browse/HADOOP-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12632591#action_12632591
]
Amar Kamat commented on HADOOP-4189:
------------------------------------
Arun,
I was not sure what the optimal values are. If we use the default values from
FileSystem, the amount of data recovered is very low. For long-running jobs
with few maps, these values should be low so that every update gets logged,
while for long-running jobs with many maps it is fine to use the system
defaults, since updates are frequent. So instead of hardcoding some heuristic,
I thought it better to keep the values (admin-)configurable and later come up
with a better heuristic or technique for setting the defaults. Any thoughts on
what the default values should be, considering the various types of jobs that
can be run?
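A minimal sketch of what the admin-configurable defaults could look like in hadoop-default.conf (the property names and descriptions here are illustrative assumptions, not necessarily the keys used in the patch; an empty value would mean "fall back to the FileSystem defaults"):

  <property>
    <name>mapred.jobtracker.job.history.block.size</name>
    <value></value>
    <description>Block size for the job history file. Empty/null means use
    the FileSystem default; admins can lower it for long-running jobs with
    few maps so that every update is persisted.</description>
  </property>

  <property>
    <name>mapred.jobtracker.job.history.buffer.size</name>
    <value></value>
    <description>Buffer size for the job history file. Empty/null means use
    the FileSystem default.</description>
  </property>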
> HADOOP-3245 is incomplete
> -------------------------
>
> Key: HADOOP-4189
> URL: https://issues.apache.org/jira/browse/HADOOP-4189
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.19.0
> Reporter: Amar Kamat
> Assignee: Amar Kamat
> Priority: Blocker
> Fix For: 0.19.0
>
> Attachments: HADOOP-4189-v1.patch, HADOOP-4189-v2.patch,
> HADOOP-4189-v3.1.patch, HADOOP-4189-v3.patch, HADOOP-4189.patch
>
>
> There are a few issues with HADOOP-3245:
> - The default block size for the history files in hadoop-default.conf is set
> to 0, so the history file remains empty throughout. It should be null instead.
> - The same goes for the buffer size.
> - The InterTrackerProtocol version needs to be bumped.