I have a really simple standalone cluster with one worker located on the
same machine as the master. Both the master and the worker launch fine with
the scripts provided in /conf. However, when I run the Spark shell with the
command MASTER=.... ./spark-shell, the executors on my worker fail to
launch. Here's a section of the log output:

13/11/21 06:32:34 INFO SparkDeploySchedulerBackend: Executor
app-20131121063231-0000/4 removed: Command exited with code 1
13/11/21 06:32:34 INFO Client$ClientActor: Executor added:
app-20131121063231-0000/5 on worker-20131121063035-node0-link0-52768
(node0-link0:7077) with 2 cores
13/11/21 06:32:34 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20131121063231-0000/5 on hostPort node0-link0:7077 with 2 cores, 512.0
MB RAM
13/11/21 06:32:34 INFO Client$ClientActor: Executor updated:
app-20131121063231-0000/5 is now RUNNING
13/11/21 06:32:34 INFO Client$ClientActor: Executor updated:
app-20131121063231-0000/5 is now FAILED (Command exited with code 1)
13/11/21 06:32:34 INFO SparkDeploySchedulerBackend: Executor
app-20131121063231-0000/5 removed: Command exited with code 1

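For reference, here is roughly how I'm bringing the cluster up (a sketch of
my steps; I'm assuming the master URL matches the spark://node0-link0:7077
that appears in the log, so substitute the real one as needed):

  # start the standalone master on this machine
  ./bin/start-master.sh

  # start a worker on the same machine, registered with the master
  ./spark-class org.apache.spark.deploy.worker.Worker spark://node0-link0:7077

  # connect the shell to the standalone master
  MASTER=spark://node0-link0:7077 ./spark-shell
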

Basically, every executor on the worker fails with exit code 1 as soon as
it is launched, and the master just keeps scheduling replacements.
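In case it helps, my understanding (an assumption on my part about the
standalone layout, not something I've verified) is that each executor's own
output lands under the worker's work/ directory, e.g. for executor 5 above:

  # stderr of the failed executor, as written by the worker
  cat work/app-20131121063231-0000/5/stderr
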
Does anybody have a solution?

thanks!
Umar
