[ https://issues.apache.org/jira/browse/MAPREDUCE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649308#comment-16649308 ]
Peter Bacsko commented on MAPREDUCE-7132:
-----------------------------------------

[~xiaochen] wow, can you set the policy at the file level? I thought it applied to a directory and its subdirectories, but it looks like this is more complicated than that. I'm not sure checking every file is a good idea; that would be too much overhead. Then the easiest solution really is just to increase {{mapreduce.job.max.split.locations}} to 15. That should work well.

> Check erasure coding in JobSplitWriter to avoid warnings
> --------------------------------------------------------
>
>                 Key: MAPREDUCE-7132
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7132
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: client, mrv2
>    Affects Versions: 3.1.1
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>         Attachments: MAPREDUCE-7132-001.patch, MAPREDUCE-7132-002.patch, MAPREDUCE-7132-003.patch, MAPREDUCE-7132-004.patch, MAPREDUCE-7132-005.patch
>
> Currently, {{JobSplitWriter}} compares the number of hosts for a certain block against a static value that comes from {{mapreduce.job.max.split.locations}}. The default value of this property is 10.
> However, an EC schema like RS-10-4 requires at least 14 hosts. In this case, 14 block locations will be returned and {{JobSplitWriter}} prints a warning, which can confuse users.
> A possible solution could check whether EC is enabled for a block and increase this value dynamically if needed.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
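
A minimal sketch of the workaround discussed in the comment, applied cluster-wide in mapred-site.xml (the property name and the value 15 come from the discussion above; the surrounding XML is a standard Hadoop configuration fragment):

```xml
<property>
  <name>mapreduce.job.max.split.locations</name>
  <!-- Default is 10; an RS-10-4 erasure-coded file can report
       10 data + 4 parity = 14 block locations, which exceeds it. -->
  <value>15</value>
</property>
```

The same override can be applied per job on the command line, e.g. `-D mapreduce.job.max.split.locations=15`, which avoids a cluster-wide change while the dynamic EC check proposed in the issue description is still under review.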