[ https://issues.apache.org/jira/browse/HADOOP-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613663#action_12613663 ]

Hemanth Yamijala commented on HADOOP-3759:
------------------------------------------

One obvious impact is on task scheduling. Suppose a memory-intensive job is 
the current job being considered, and a TT comes in with less than the 
required amount of free memory. Scheduling tasks of other jobs on this TT 
could starve the memory-intensive job, while not scheduling other jobs could 
under-utilize the cluster. However, with the fairness scheduling primitives 
(such as user limits) being discussed in HADOOP-3445 and HADOOP-3746, it may 
be possible to handle these situations. More details will need to be worked 
out for this aspect.
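To make the tradeoff concrete, here is a minimal hypothetical sketch (not actual Hadoop code; the class, method, and parameter names are invented for illustration) of the decision a memory-aware scheduler faces when a TT's free memory is below the head-of-queue job's requirement:

```java
// Hypothetical illustration of the scheduling dilemma described above.
// None of these names come from the Hadoop codebase.
public class MemoryAwareSchedulingSketch {

    enum Decision {
        SCHEDULE_HEAD_JOB,     // TT has enough memory for the head job
        RESERVE_FOR_HEAD_JOB,  // hold the slot free (risks under-utilization)
        BACKFILL_OTHER_JOBS    // run a smaller job (risks starving the head job)
    }

    /**
     * @param freeMemMB     memory the TT reports as free
     * @param requiredMemMB the memory-intensive head job's requirement
     * @param reserveForMemoryIntensive the policy knob this comment debates:
     *        whether to hold capacity for the head job or backfill
     */
    static Decision decide(long freeMemMB, long requiredMemMB,
                           boolean reserveForMemoryIntensive) {
        if (freeMemMB >= requiredMemMB) {
            return Decision.SCHEDULE_HEAD_JOB;
        }
        // Not enough memory for the head job: both remaining choices have
        // a cost, which is why fairness primitives (user limits, etc.) are
        // needed to arbitrate between them.
        return reserveForMemoryIntensive
                ? Decision.RESERVE_FOR_HEAD_JOB
                : Decision.BACKFILL_OTHER_JOBS;
    }
}
```

Either fixed policy is unsatisfactory on its own; the point of the discussion is that limits from HADOOP-3445/HADOOP-3746 could bound how long either cost is paid.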

> Provide ability to run memory intensive jobs without affecting other running 
> tasks on the nodes
> -----------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3759
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3759
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Hemanth Yamijala
>             Fix For: 0.19.0
>
>
> In HADOOP-3581, we are discussing how to prevent memory intensive tasks from 
> affecting Hadoop daemons and other tasks running on a node. A related 
> requirement is that users be provided an ability to run jobs which are memory 
> intensive. The system must provide enough knobs to allow such jobs to be run 
> while still maintaining the requirements of HADOOP-3581.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.