Ok, thanks. I have 1 worker process on each machine but I would like to run
my app on only 3 of them. Is it possible?
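
From what I can tell, capping the total cores might achieve this: with 8-core
workers, a 24-core cap leaves at most 3 workers' worth of cores for the app.
A sketch (the master URL and jar name are placeholders; not yet verified on
this cluster):

    spark-submit \
      --master spark://master:7077 \
      --total-executor-cores 24 \
      --executor-memory 2g \
      my-streaming-app.jar

Though with the default spark.deploy.spreadOut=true the standalone master
spreads an app's cores across all workers; setting spark.deploy.spreadOut=false
on the master makes it consolidate each app onto as few workers as possible.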

On Wed, 24 Jun 2015 at 11:44, Evo Eftimov <evo.efti...@isecc.com>
wrote:

> There is no direct one-to-one mapping between Executor and Node.
>
> An Executor is simply the Spark framework's term for a JVM instance with
> some Spark framework system code running in it.
>
> A node is a physical server machine.
>
> You can have more than one JVM per node.
>
> And vice versa, you can have nodes without any JVM running on them. How? By
> specifying the number of executors to be less than the number of nodes.
>
> So if you specify the number of executors to be 1 and you have 5 nodes, ONE
> executor will run on only one of them.
>
> The above is valid for Spark on YARN, where the executor count is requested
> explicitly at submit time.
>
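> For example, a sketch of a YARN submission (resources and jar name are
> illustrative):
>
>     # Request exactly one executor, so application code runs on only
>     # one node even though the cluster has five
>     spark-submit \
>       --master yarn-cluster \
>       --num-executors 1 \
>       --executor-memory 2g \
>       --executor-cores 4 \
>       my-app.jar
>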
> For Spark in standalone mode, the number of executors on each node is equal
> to the number of Spark worker processes (daemons) running on that node,
> since each worker launches one executor per application.
>
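> As a sketch, the number of worker daemons per node is set in
> conf/spark-env.sh (values here are illustrative):
>
>     # Two worker processes per node means up to two executors per
>     # node for each application
>     SPARK_WORKER_INSTANCES=2
>     SPARK_WORKER_CORES=4     # cores offered by each worker process
>     SPARK_WORKER_MEMORY=2g   # memory offered by each worker process
>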
> *From:* Wojciech Pituła [mailto:w.pit...@gmail.com]
> *Sent:* Tuesday, June 23, 2015 12:38 PM
> *To:* user@spark.apache.org
> *Subject:* Spark Streaming: limit number of nodes
>
> I have set up a small standalone cluster: 5 nodes, and every node has 5GB
> of memory and 8 cores. As you can see, the nodes don't have much RAM.
>
> I have 2 streaming apps; the first one is configured to use 3GB of memory
> per node and the second one uses 2GB per node.
>
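> Roughly how that memory is set, for reference (Scala sketch; the app name
> is a placeholder, and the second app uses "2g" instead):
>
>     import org.apache.spark.SparkConf
>     import org.apache.spark.streaming.{Seconds, StreamingContext}
>
>     object FirstApp {
>       def main(args: Array[String]): Unit = {
>         // One worker (hence one executor) per node, so executor
>         // memory is effectively the app's memory per node
>         val conf = new SparkConf()
>           .setAppName("first-app")
>           .set("spark.executor.memory", "3g")
>         val ssc = new StreamingContext(conf, Seconds(10))
>         // ... streaming job definition elided ...
>         ssc.start()
>         ssc.awaitTermination()
>       }
>     }
>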
> My problem is that the smaller app could easily run on 2 or 3 nodes instead
> of 5, so I could launch a third app.
>
> Is it possible to limit the number of nodes (executors) that an app will
> get from the standalone cluster?
>
