Use *spark.cores.max* to cap the number of cores each job can take from
the cluster; that way you can easily accommodate your third job as well.
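
For example, in standalone mode you can set it on the SparkConf of the
smaller app (a rough sketch; the app name, master URL and the cap of 16
cores are placeholders, pick whatever fits your cluster):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Cap the total cores this app may take from the standalone cluster,
    // so the remaining cores stay free for the other jobs.
    val conf = new SparkConf()
      .setAppName("smaller-streaming-app")     // placeholder name
      .setMaster("spark://master-host:7077")   // placeholder master URL
      .set("spark.cores.max", "16")            // e.g. 2 nodes x 8 cores
      .set("spark.executor.memory", "2g")

    val ssc = new StreamingContext(conf, Seconds(10))

You can also pass it at submit time, e.g.
spark-submit --conf spark.cores.max=16 ...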

Thanks
Best Regards

On Tue, Jun 23, 2015 at 5:07 PM, Wojciech Pituła <w.pit...@gmail.com> wrote:

> I have set up a small standalone cluster: 5 nodes, every node has 5GB
> of memory and 8 cores. As you can see, the nodes don't have much RAM.
>
> I have 2 streaming apps; the first one is configured to use 3GB of
> memory per node and the second one uses 2GB per node.
>
> My problem is that the smaller app could easily run on 2 or 3 nodes
> instead of 5, so I could launch a third app.
>
> Is it possible to limit the number of nodes (executors) that an app
> will get from the standalone cluster?
>
