[ https://issues.apache.org/jira/browse/HADOOP-657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12600932#action_12600932 ]

Ari Rabkin commented on HADOOP-657:
-----------------------------------

Here's what we have currently.  ResourceEstimator keeps an estimate of how big 
the average map's output is.  As map tasks complete, we update this estimate.  
If a node has less than twice the average output size in free disk space, we 
don't assign tasks to it.  I haven't implemented the percentile aspect; the 
average is computationally much simpler.

So if a job has 10 GB of input, split across ten map tasks, tasks will only be 
started on nodes with at least two gigabytes free. 
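
For illustration, here is a minimal sketch of that heuristic.  The class and 
method names are mine for this comment, not the ones in diskspaceest.patch:

// Sketch only: hypothetical names, not the actual ResourceEstimator code.
public class MapOutputSizeEstimator {
  private long completedMaps = 0;
  private long totalOutputBytes = 0;

  // Called as each map task finishes, with the bytes it actually wrote.
  public synchronized void noteMapCompleted(long outputBytes) {
    completedMaps++;
    totalOutputBytes += outputBytes;
  }

  // Running average of map output size (0 until the first map finishes).
  public synchronized long averageMapOutputSize() {
    return completedMaps == 0 ? 0 : totalOutputBytes / completedMaps;
  }

  // A tracker is only eligible if it has at least twice the average free.
  public synchronized boolean hasRoomForTask(long freeDiskBytes) {
    return freeDiskBytes >= 2 * averageMapOutputSize();
  }
}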

It's been tested locally, and indeed, jobs only go to task trackers with 
sufficient space.  The next step is testing at scale, on a cluster.

> Free temporary space should be modelled better
> ----------------------------------------------
>
>                 Key: HADOOP-657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-657
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Owen O'Malley
>            Assignee: Ari Rabkin
>         Attachments: diskspaceest.patch
>
>
> Currently, there is a configurable size that must be free for a task tracker 
> to accept a new task. However, that isn't a very good model of what the task 
> is likely to take. I'd like to propose:
> Map tasks:  totalInputSize * conf.getFloat("map.output.growth.factor", 1.0) / numMaps
> Reduce tasks: totalInputSize * 2 * conf.getFloat("map.output.growth.factor", 1.0) / numReduces
> where totalInputSize is the size of all the map inputs for the given job.
> To start a new task, 
>   newTaskAllocation + (sum over running tasks of (1.0 - done) * allocation) 
> <= 
>        free disk * conf.getFloat("mapred.max.scratch.allocation", 0.90);
> So in English, we will model the expected sizes of tasks and only assign 
> tasks that should leave us a 10% margin. With:
> map.output.growth.factor -- the size of the transient data relative to the 
> map inputs
> mapred.max.scratch.allocation -- the maximum amount of our disk we want to 
> allocate to tasks.
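
For reference, here is the proposal quoted above as a rough sketch in code.  
The variable and method names are illustrative only, not taken from the 
attached patch:

// Illustrative sketch of the formulas proposed in the issue description;
// names here are hypothetical, not from diskspaceest.patch.
public class ScratchSpaceModel {

  // Estimated scratch space for one map task:
  //   totalInputSize * map.output.growth.factor / numMaps
  static long mapTaskAllocation(long totalInputSize, float growthFactor,
                                int numMaps) {
    return (long) (totalInputSize * growthFactor / numMaps);
  }

  // Estimated scratch space for one reduce task:
  //   totalInputSize * 2 * map.output.growth.factor / numReduces
  static long reduceTaskAllocation(long totalInputSize, float growthFactor,
                                   int numReduces) {
    return (long) (totalInputSize * 2 * growthFactor / numReduces);
  }

  // Admission check: the new task plus the remaining share of the running
  // tasks must fit within mapred.max.scratch.allocation of the free disk.
  static boolean canStartTask(long newTaskAllocation,
                              long[] runningAllocations, float[] doneFraction,
                              long freeDisk, float maxScratchAllocation) {
    double committed = newTaskAllocation;
    for (int i = 0; i < runningAllocations.length; i++) {
      committed += (1.0 - doneFraction[i]) * runningAllocations[i];
    }
    return committed <= freeDisk * maxScratchAllocation;
  }
}

With a growth factor of 1.0 and the 10 GB / ten-map example above, 
mapTaskAllocation works out to roughly 1 GB per map, which is consistent with 
the twice-the-average (2 GB) check the current code applies.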
