It works on Mesos, too. I'm not sure about Standalone mode.

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Mon, Jan 11, 2016 at 10:01 AM, Nick Peterson <nrpeter...@gmail.com>
wrote:

> My understanding is that dynamic allocation is only enabled for
> Spark-on-Yarn. Those settings likely have no impact in standalone mode.
>
> Nick
>
> On Mon, Jan 11, 2016, 5:10 AM Yiannis Gkoufas <johngou...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I am exploring the dynamic resource allocation provided by the
>> Standalone Cluster Mode a bit, and I was wondering whether the behavior
>> I am experiencing is expected.
>> In my configuration I have 3 slaves with 24 cores each.
>> I have in my spark-defaults.conf:
>>
>> spark.shuffle.service.enabled true
>> spark.dynamicAllocation.enabled true
>> spark.dynamicAllocation.minExecutors 1
>> spark.dynamicAllocation.maxExecutors 6
>> spark.executor.cores 4
>>
>> When I submit a first job, it takes up all 72 cores of the cluster.
>> When I submit a second job while the first one is running, I get the
>> error:
>>
>> Initial job has not accepted any resources; check your cluster UI to
>> ensure that workers are registered and have sufficient resources
>>
>> Is this the expected behavior?
>>
>> Thanks a lot
>>
>>
>>
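
For reference, a minimal sketch of how the same settings could be passed
per application at submit time instead of (or in addition to)
spark-defaults.conf. The master URL, application class, and jar path below
are placeholders; only the properties already listed in the thread are used:

    # Placeholders: master URL, application class, and jar name are illustrative.
    spark-submit \
      --master spark://master-host:7077 \
      --class com.example.MyApp \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=1 \
      --conf spark.dynamicAllocation.maxExecutors=6 \
      --conf spark.executor.cores=4 \
      my-app.jar

Supplying the properties per application like this makes it easier to vary
them between the two jobs when checking whether they are actually being
picked up.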
