You can enable fair sharing of resources between jobs in Spark. See
http://spark.incubator.apache.org/docs/latest/job-scheduling.html
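
As a minimal sketch (the spark.scheduler.mode property and scheduler pools
are from that page; the app name, master URL, and pool name below are
placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Switch the scheduler from the default FIFO mode to FAIR.
    val conf = new SparkConf()
      .setAppName("fair-sharing-example")   // placeholder app name
      .setMaster("local[4]")                // placeholder master URL
      .set("spark.scheduler.mode", "FAIR")

    val sc = new SparkContext(conf)

    // Jobs submitted from different threads now share executors fairly.
    // Optionally, assign this thread's jobs to a named scheduler pool:
    sc.setLocalProperty("spark.scheduler.pool", "pool1")  // placeholder pool

With FAIR mode, concurrently submitted jobs get a roughly equal share of
cluster resources instead of queueing behind one another.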


On Sun, Jan 26, 2014 at 8:25 PM, Sai Prasanna <[email protected]> wrote:

> Please, someone throw some light on this.
>
> In the lazy (delay) scheduling that Spark has implemented, it is stated
> that jobs are sorted when a slot becomes free at a node, and the job with
> the fewest running tasks is scheduled first, or made to wait at most time
> D if its data is not local to that node. My question is, doesn't this
> result in jobs with a large number of tasks getting starved? If not, then
> how?
>
>
> --
> Sai Prasanna. AN
> II M.Tech (CS), SSSIHL
>
> Entire water in the ocean can never sink a ship, Unless it gets inside.
> All the pressures of life can never hurt you, Unless you let them in.
>
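
For reference, the delay-scheduling behavior described in the question can
be sketched roughly like this (illustrative Scala only; Job, its fields,
and scheduleSlot are hypothetical names, and D is the maximum wait time
from the question):

    // Illustrative sketch of delay scheduling as described above: when a
    // slot frees up on `node`, jobs are considered in order of fewest
    // running tasks; a job whose data is not on that node keeps waiting,
    // but only up to a maximum of D milliseconds.
    case class Job(id: Int, runningTasks: Int, waitedMs: Long,
                   localNodes: Set[String])

    def scheduleSlot(node: String, jobs: Seq[Job], dMs: Long): Option[Job] =
      jobs.sortBy(_.runningTasks).find { job =>
        job.localNodes.contains(node) || job.waitedMs >= dMs
      }

The cap D is also what bounds starvation in this description: a large job
that keeps losing slots to smaller data-local jobs will, after waiting D,
accept a non-local slot rather than wait indefinitely.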
