[ https://issues.apache.org/jira/browse/MAPREDUCE-548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732826#action_12732826 ]

Hadoop QA commented on MAPREDUCE-548:
-------------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12413886/mapreduce-548-v3.patch
  against trunk revision 794942.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/408/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/408/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/408/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/408/console

This message is automatically generated.

> Global scheduling in the Fair Scheduler
> ---------------------------------------
>
>                 Key: MAPREDUCE-548
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-548
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>            Reporter: Matei Zaharia
>            Assignee: Matei Zaharia
>             Fix For: 0.21.0
>
>         Attachments: fs-global-v0.patch, hadoop-4667-v1.patch, 
> hadoop-4667-v1b.patch, hadoop-4667-v2.patch, HADOOP-4667_api.patch, 
> mapreduce-548-v1.patch, mapreduce-548-v2.patch, mapreduce-548-v3.patch, 
> mapreduce-548.patch
>
>
> The current schedulers in Hadoop all examine a single job on every heartbeat 
> when choosing which tasks to assign, choosing the job based on FIFO or fair 
> sharing. There are inherent limitations to this approach. For example, if the 
> job at the front of the queue is small (e.g. 10 maps, in a cluster of 100 
> nodes), then on average it will launch only one local map on the first 10 
> heartbeats while it is at the head of the queue. This leads to very poor 
> locality for small jobs. Instead, we need a more "global" view of scheduling 
> that can look at multiple jobs. To resolve the locality problem, we will use 
> the following algorithm:
> - If the job at the head of the queue has no node-local task to launch, skip 
> it and look through other jobs.
> - If a job has waited at least T1 seconds while being skipped, also allow it 
> to launch rack-local tasks.
> - If a job has waited at least T2 > T1 seconds, also allow it to launch 
> off-rack tasks.
> This algorithm improves locality while bounding the delay that any job 
> experiences in launching a task.
> It turns out that whether waiting is useful depends on how many tasks are 
> left in the job (which determines the probability of getting a heartbeat 
> from a node with a local task) and on whether the job is CPU- or IO-bound. 
> Thus there may be logic for removing the wait for the last few tasks in a job.
> As a related issue, once we allow global scheduling, we can launch multiple 
> tasks per heartbeat, as in HADOOP-3136. The initial implementation of 
> HADOOP-3136 adversely affected performance because it only launched multiple 
> tasks from the same job, but with the wait rule above, we will only do this 
> for jobs that are allowed to launch non-local tasks.
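The wait rule in the description above can be sketched as a small locality check. This is a minimal illustration, not the actual Fair Scheduler code: the class and field names (Job, skippedSinceMs) and the concrete T1/T2 values are assumptions for the sketch.

```java
// Sketch of the delay-scheduling rule: a job skipped for lack of a
// node-local task is allowed progressively less-local tasks as it waits.
public class DelaySchedulingSketch {
    enum Locality { NODE_LOCAL, RACK_LOCAL, OFF_RACK }

    static class Job {
        // Timestamp (ms) at which this job was first skipped because it had
        // no node-local task to launch; 0 means it has not been skipped.
        long skippedSinceMs;
    }

    // Illustrative thresholds; the real values would be configurable.
    static final long T1_MS = 5_000;   // wait before rack-local is allowed
    static final long T2_MS = 10_000;  // wait before off-rack is allowed (T2 > T1)

    /** Most relaxed locality level the job may launch a task at right now. */
    static Locality allowedLevel(Job job, long nowMs) {
        if (job.skippedSinceMs == 0) {
            return Locality.NODE_LOCAL;   // not yet skipped: stay strict
        }
        long waitedMs = nowMs - job.skippedSinceMs;
        if (waitedMs >= T2_MS) {
            return Locality.OFF_RACK;     // waited past T2: launch anywhere
        }
        if (waitedMs >= T1_MS) {
            return Locality.RACK_LOCAL;   // waited past T1: rack-local is OK
        }
        return Locality.NODE_LOCAL;       // keep waiting for a local slot
    }
}
```

On each heartbeat the scheduler would walk the job queue and assign a task only if the job has one at or below its allowed level; this bounds any job's launch delay by T2 while giving small jobs time to find local slots.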

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.