>
> According to the documentation, spark standalone currently only supports a
> FIFO scheduling system.


That's not true:
http://spark.incubator.apache.org/docs/latest/job-scheduling.html#fair-scheduler-pools

[sorry for the prior misfire]
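For reference, the fair-scheduler pools described at that link are configured per application via an allocation file. A minimal sketch (pool names and values here are illustrative, not from this thread):

```
# In the application's SparkConf / spark-defaults.conf:
spark.scheduler.mode             FAIR
spark.scheduler.allocation.file  /path/to/fairscheduler.xml

# fairscheduler.xml: define pools with a scheduling mode, weight, and minShare
<?xml version="1.0"?>
<allocations>
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

Jobs are then assigned to a pool from the submitting thread with `sc.setLocalProperty("spark.scheduler.pool", "production")`. Note this governs scheduling of jobs *within* one application; scheduling *across* applications on a standalone cluster is a separate matter.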



On Tue, Nov 19, 2013 at 7:30 AM, Mark Hamstra <[email protected]> wrote:

>
>
>
> On Tue, Nov 19, 2013 at 6:50 AM, Yadid Ayzenberg <[email protected]> wrote:
>
>>  Hi all,
>>
>> According to the documentation, spark standalone currently only supports
>> a FIFO scheduling system.
>> I understand its possible to limit the number of cores a job uses by
>> setting spark.cores.max.
>> When running a job, will Spark try to use the maximum number of cores on
>> each machine until it reaches the set limit, or will it allocate cores
>> round-robin style: use a single core on each machine, and then, if a core
>> has already been claimed on every slave and the limit has not yet been
>> reached, claim an additional core on each machine, and so on?
>>
>> I think the latter makes more sense, but I want to be sure that is the
>> case.
>>
>> Thanks,
>> Yadid
>>
>>
>
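On the actual question of how cores are claimed: the standalone master exposes a setting that controls exactly this trade-off. A minimal sketch, assuming the standalone deployment docs for this era of Spark (property name taken from those docs; values shown are the defaults):

```
# Application side: cap the total cores this application may claim
# across the whole cluster.
spark.cores.max        4

# Cluster side (standalone master): when true (the default), the master
# spreads an application's cores across as many workers as possible,
# i.e. the round-robin behavior described above; when false, it
# consolidates the cores onto as few workers as possible.
spark.deploy.spreadOut true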
