Hey All,

start-slaves.sh and stop-slaves.sh use SSH to connect to the remote worker nodes of the cluster. Are there alternative ways to start and stop workers that don't require SSH?
For example, starting a worker with

  ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://IP:PORT

works fine, but there is then no way to kill that Worker other than stop-slave(s).sh, or finding the process with ps -ef and killing it by hand. Is there an alternative along the lines of Hadoop's hadoop-daemon.sh start|stop xyz?

I noticed that spark-daemon.sh exists, but maybe we need to improve the documentation around it. For instance, its usage line is:

  Usage: spark-daemon.sh [--config <conf-dir>] (start|stop|status) <spark-command> <spark-instance-number> <args>

What are the valid spark-commands? Can this be used to start and stop workers on the current node?

Many thanks,
Devl
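P.S. To help frame the question, here is a sketch of what I would expect to work, assuming spark-daemon.sh lives under sbin/ and accepts the same class names as spark-class for its <spark-command> argument (both assumptions on my part, untested):

  # start one Worker instance (instance number 1) on the local node;
  # trailing <args> are presumably passed through to the Worker class
  ./sbin/spark-daemon.sh start org.apache.spark.deploy.worker.Worker 1 spark://IP:PORT

  # stop that same instance, identified by class name and instance number
  ./sbin/spark-daemon.sh stop org.apache.spark.deploy.worker.Worker 1

If that is indeed the intended usage, documenting it would answer both of my questions above.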