Hey, thanks a lot for reporting this. Do you mind filing a JIRA with
the details so we can track it?

- Patrick

On Wed, Jun 4, 2014 at 9:24 AM, Marek Wiewiorka
<marek.wiewio...@gmail.com> wrote:
> Exactly the same story - it used to work with 0.9.1 and does not work
> anymore with 1.0.0.
> I ran tests using spark-shell as well as from my application (so I tested
> turning on coarse mode both via the environment variable and via
> SparkContext properties explicitly).
>
> M.
>
>
> 2014-06-04 18:12 GMT+02:00 ajatix <a...@sigmoidanalytics.com>:
>
>> I'm running a manually built cluster on EC2. I have Mesos (0.18.2) and
>> HDFS
>> (2.0.0-cdh4.5.0) installed on all slaves (3) and masters (3). I have
>> spark-1.0.0 on one master, and the executor file is on HDFS for the slaves.
>> Whenever I try to launch a Spark application on the cluster, it starts a
>> task on each slave (I'm using default configs), and the tasks start FAILING
>> with the error message 'Is spark installed on it?'
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-fails-if-mesos-coarse-set-to-true-tp6817p6945.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
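For anyone following along, a minimal sketch of what "turning on coarse mode via SparkContext properties explicitly" looks like in Spark 1.0.0 is below. This is an illustration, not a fix: the Mesos master URL, app name, and the HDFS executor path are placeholders, not values from this thread.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: enable Mesos coarse-grained mode through SparkConf.
// "mesos://master-host:5050" and the HDFS path are hypothetical.
val conf = new SparkConf()
  .setMaster("mesos://master-host:5050")
  .setAppName("CoarseModeTest")
  // Default is fine-grained mode; this switches to coarse-grained.
  .set("spark.mesos.coarse", "true")
  // Executor tarball that Mesos slaves fetch at task launch.
  .set("spark.executor.uri", "hdfs://namenode:8020/path/to/spark-1.0.0.tgz")

val sc = new SparkContext(conf)
```

The same toggle can be set via `SPARK_MESOS_COARSE`-style environment configuration or in spark-shell before the context is created, which matches the two paths Marek says he tested.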
