Hi Ken,

This is unfortunately a limitation of spark-shell and the way it works in 
standalone mode. spark-shell sets an environment variable, SPARK_HOME, which 
tells Spark where to find its code installed on the cluster. This means the 
install path on your laptop must match the install path on the cluster, which 
in your case it does not. I recommend one of two things:

1) Either run spark-shell from a cluster node, where it will have the right 
path. (In general it’s also better for performance to run the shell close to 
the cluster.)

2) Or, edit the spark-shell script and re-export SPARK_HOME right before it 
runs the Java command (ugly, but it will probably work).
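Concretely, option 2 would look something like this. The path below is just an 
example of where Spark might be installed on the cluster nodes, not your actual 
location, so substitute whatever path the workers really use:

```shell
# Sketch of option 2. Add these lines near the end of bin/spark-shell,
# just before the line that launches the JVM, so the executors look for
# compute-classpath.sh at the cluster-side path instead of the laptop path.
SPARK_HOME=/opt/spark-0.8.1-incubating   # assumed install path on the workers
export SPARK_HOME
echo "SPARK_HOME=$SPARK_HOME"            # sanity check; prints the forced path
```

The idea is simply that whatever value spark-shell picked up from your laptop 
gets overwritten with the path that is valid on the cluster before the command 
is handed to Java.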

Hopefully we’ll fix this in a future release.

Matei

On Jan 23, 2014, at 6:16 PM, kyocum <kyo...@illumina.com> wrote:

> Trying to run spark-shell from my laptop to a master node in a cluster.  It
> appears that if you've installed spark at loc A on a cluster and in loc B on
> your local machine, the current app framework uses loc B as the path for
> start up.  Any way to config my way around this?  TIA
> 
> The shell connects to the master and can connect to the workers: 
> 
> Using Scala version 2.9.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_09)
> Initializing interpreter...
> Creating SparkContext...
> 14/01/23 18:07:36 INFO Slf4jEventHandler: Slf4jEventHandler started
> 14/01/23 18:07:36 INFO SparkEnv: Registering BlockManagerMaster
> 14/01/23 18:07:36 INFO DiskBlockManager: Created local directory at
> /var/folders/d1/9h0hs71d0h112s1tk6vcbx540000gp/T/spark-local-20140123180736-bcf9
> 14/01/23 18:07:36 INFO MemoryStore: MemoryStore started with capacity 647.7
> MB.
> 14/01/23 18:07:36 INFO ConnectionManager: Bound socket to port 59238 with id
> = ConnectionManagerId(ilmn-coe-2.net,59238)
> 14/01/23 18:07:36 INFO BlockManagerMaster: Trying to register BlockManager
> 14/01/23 18:07:36 INFO BlockManagerMasterActor$BlockManagerInfo: Registering
> block manager ilmn-coe-2.net:59238 with 647.7 MB RAM
> 14/01/23 18:07:36 INFO BlockManagerMaster: Registered BlockManager
> 14/01/23 18:07:36 INFO HttpBroadcast: Broadcast server started at
> http://10.12.195.116:59239
> 14/01/23 18:07:36 INFO SparkEnv: Registering MapOutputTracker
> 14/01/23 18:07:36 INFO HttpFileServer: HTTP File server directory is
> /var/folders/d1/9h0hs71d0h112s1tk6vcbx540000gp/T/spark-d683bfd4-333a-4610-be20-a7390aa8d0ba
> 14/01/23 18:07:36 INFO SparkUI: Started Spark Web UI at
> http://ilmn-coe-2.net:4040
> 14/01/23 18:07:36 INFO Client$ClientActor: Connecting to master
> spark://hnn05.net:7077...
> 2014-01-23 18:07:36.475 java[9793:6403] Unable to load realm info from
> SCDynamicStore
> Spark context available as sc.
> 14/01/23 18:07:36 INFO SparkDeploySchedulerBackend: Connected to Spark
> cluster with app ID app-20140123180739-0021
> 14/01/23 18:07:36 INFO Client$ClientActor: Executor added:
> app-20140123180739-0021/0 on worker-20140123164655-192.168.28.232-51898
> (192.168.28.232:7077) with 4 cores
> 14/01/23 18:07:36 INFO SparkDeploySchedulerBackend: Granted executor ID
> app-20140123180739-0021/0 on hostPort 192.168.28.232:7077 with 4 cores,
> 1024.0 MB RAM
> 14/01/23 18:07:36 INFO Client$ClientActor: Executor updated:
> app-20140123180739-0021/0 is now RUNNING
> 14/01/23 18:07:36 INFO Client$ClientActor: Executor updated:
> app-20140123180739-0021/0 is now FAILED (class java.io.IOException: Cannot
> run program
> "/Users/kyocum/spark/spark-0.8.1-incubating/bin/compute-classpath.sh" (in
> directory "."): java.io.IOException: error=2, No such file or directory)
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/executor-failed-cannot-find-compute-classpath-sh-tp859.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
