I'm running a manually built cluster on EC2. I have Mesos (0.18.2) and HDFS
(2.0.0-cdh4.5.0) installed on all three slaves and all three masters. I have
Spark 1.0.0 on one master, and the executor tarball is on HDFS for the slaves
to fetch. Whenever I try to launch a Spark application on the cluster, it
starts a task on each slave (I'm using the default configs), and the tasks
start FAILING with the error message 'Is spark installed on it?'
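For reference, here is roughly how the application is being set up; the ZooKeeper
addresses, HDFS path, and app name below are illustrative placeholders, not my
exact values:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      // Mesos master via ZooKeeper (hosts are placeholders)
      .setMaster("mesos://zk://master1:2181,master2:2181,master3:2181/mesos")
      .setAppName("test-app")
      // Where slaves should fetch the Spark executor tarball from
      // (path is a placeholder)
      .set("spark.executor.uri",
           "hdfs://namenode:8020/path/to/spark-1.0.0-bin.tar.gz")
      // The failures appear when coarse-grained mode is enabled
      .set("spark.mesos.coarse", "true")

    val sc = new SparkContext(conf)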


