[ https://issues.apache.org/jira/browse/HADOOP-960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12468762 ]
Doug Cutting commented on HADOOP-960:
-------------------------------------
> have the same number of records in each split
That's a very different policy. The base implementation does not open files;
it only examines their lengths. I think adding this as a "knob" would result
in convoluted code. This sounds like a different splitting algorithm
altogether, not a modification of the existing one, so I'd suggest
implementing it as a separate InputFormat. If you feel others might find it
useful, please contribute it.
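
For context, a minimal sketch of what such a separate InputFormat might look
like, written against the classic org.apache.hadoop.mapred API as it appears
in later releases (the 0.10-era class names differed slightly). It scans each
file once to find record (line) boundaries and then emits splits holding
roughly equal numbers of records. The class name RecordBalancedInputFormat,
the example.records.per.split property, and the byte-at-a-time scan are all
illustrative assumptions, not anything attached to this issue.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

// Illustrative sketch only: splits each input file on record (line) boundaries
// so every split holds roughly the same number of records. The class name and
// the "example.records.per.split" property are made up for this example.
public class RecordBalancedInputFormat extends TextInputFormat {

  @Override
  public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
    int recordsPerSplit = job.getInt("example.records.per.split", 100000);
    List<InputSplit> splits = new ArrayList<InputSplit>();
    for (FileStatus file : listStatus(job)) {
      Path path = file.getPath();
      long fileLen = file.getLen();
      if (fileLen == 0) {
        continue;                       // nothing to read in an empty file
      }
      FileSystem fs = path.getFileSystem(job);
      // One pass over the file to collect the byte offset of each record start.
      // Reading byte-by-byte keeps the sketch short; a real version would buffer.
      List<Long> starts = new ArrayList<Long>();
      starts.add(0L);
      FSDataInputStream in = fs.open(path);
      try {
        long pos = 0;
        int b;
        while ((b = in.read()) != -1) {
          pos++;
          if (b == '\n' && pos < fileLen) {
            starts.add(pos);            // a new record begins after each newline
          }
        }
      } finally {
        in.close();
      }
      // Emit one split per group of recordsPerSplit records. Boundaries fall on
      // record starts, so each split covers whole records.
      for (int i = 0; i < starts.size(); i += recordsPerSplit) {
        long begin = starts.get(i);
        long end = (i + recordsPerSplit < starts.size())
            ? starts.get(i + recordsPerSplit) : fileLen;
        splits.add(new FileSplit(path, begin, end - begin, (String[]) null));
      }
    }
    return splits.toArray(new InputSplit[splits.size()]);
  }
}

Note the cost Doug points out: unlike the default splitter, this has to read
every byte of the input once just to plan the splits.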
> Incorrect number of map tasks when there are multiple input files
> -----------------------------------------------------------------
>
> Key: HADOOP-960
> URL: https://issues.apache.org/jira/browse/HADOOP-960
> Project: Hadoop
> Issue Type: Improvement
> Components: documentation
> Affects Versions: 0.10.1
> Reporter: Andrew McNabb
> Priority: Minor
>
> This problem happens with hadoop-streaming and possibly elsewhere. If there
> are 5 input files, Hadoop creates 130 map tasks, even though
> mapred.map.tasks=128. The number of map tasks is incorrectly set to a
> multiple of the number of files. (I wrote a much more complete bug report,
> but Jira lost it when it hit an error, so I'm not in the mood to write it
> all again.)
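
The arithmetic behind the 130 is that each input file is split independently,
so per-file rounding accumulates and mapred.map.tasks acts only as a hint. A
back-of-the-envelope sketch, assuming the five files are the same size; the
numbers are illustrative, not a line-for-line copy of Hadoop's splitter:

public class SplitCountSketch {
  public static void main(String[] args) {
    long[] fileSizes = {1000000, 1000000, 1000000, 1000000, 1000000}; // assumed equal sizes
    int requestedMaps = 128;                    // mapred.map.tasks, treated as a hint
    long totalSize = 0;
    for (long s : fileSizes) totalSize += s;
    long goalSize = totalSize / requestedMaps;  // target bytes per split (39062 here)
    long totalSplits = 0;
    for (long s : fileSizes) {
      // each file is rounded up independently and yields at least one split
      totalSplits += Math.max(1, (s + goalSize - 1) / goalSize);
    }
    System.out.println(totalSplits);            // 26 splits per file, 130 in total
  }
}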