Hi Jestin,

As I understand it, you are using Spark in standalone mode, meaning that you
start the master and slave/worker processes yourself.

You can specify the number of workers for each node in the
$SPARK_HOME/conf/spark-env.sh file, as below:

# Options for the daemons used in the standalone deploy mode
export SPARK_WORKER_INSTANCES=3   # number of worker processes per node
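
For example, a minimal spark-env.sh sketch could look like the below (the
cores/memory figures are only illustrative, set them to match your hardware):

export SPARK_WORKER_INSTANCES=3   # worker processes per node
export SPARK_WORKER_CORES=2       # cores each worker is allowed to use
export SPARK_WORKER_MEMORY=4g     # memory each worker is allowed to use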

And you specify the hostnames of the slave (worker) nodes in the conf/slaves file.
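
For example, a conf/slaves file could look like this (the host names below
are just placeholders):

# one worker host per line
localhost
worker-host-1
worker-host-2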

When you run start-master.sh and start-slaves.sh, you will see the worker
processes start up.
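
Those scripts live under $SPARK_HOME/sbin, so something like:

$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slaves.sh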

Now, if you have localhost in the slaves file, you will also start worker
processes on your master node, so to speak. There is nothing wrong with that
as long as your master node has enough resources for the Spark app.

Once you have started them, you will see something like the below using the jps command:

21697 Worker
18242 Master
21496 Worker
21597 Worker

Where is your edge node (i.e. where are you submitting your Spark app from)?
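
For reference, submitting from an edge node in client mode against a
standalone master would look roughly like the below (the class, jar and host
names are just placeholders; 7077 is the default standalone master port):

spark-submit \
  --master spark://<master-host>:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  /path/to/myapp.jar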


HTH




Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 27 July 2016 at 18:19, Jestin Ma <jestinwith.a...@gmail.com> wrote:

> Hi, I'm doing performance testing and currently have 1 master node and 4
> worker nodes and am submitting in client mode from a 6th cluster node.
>
> I know we can have a master and worker on the same node. Speaking in terms
> of performance and practicality, is it possible/suggested to have another
> working running on either the 6th node or the master node?
>
> Thank you!
>
>
