Yup, this will still be supported.
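For anyone finding this thread later, the 0.8.0-style setup being discussed looks roughly like the sketch below. Host names, paths, and the tarball name are illustrative, not taken from Gary's message; `spark.executor.uri` can also be set as a Java system property instead of the `SPARK_EXECUTOR_URI` environment variable.

```shell
# Upload the Spark distribution to HDFS so Mesos executors can fetch it
# (paths and version are illustrative).
hadoop fs -mkdir -p /spark
hadoop fs -put spark-0.8.0-incubating.tar.gz /spark/

# In conf/spark-env.sh on the driver machine:
export SPARK_EXECUTOR_URI=hdfs://namenode:9000/spark/spark-0.8.0-incubating.tar.gz
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
```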

On Dec 18, 2013, at 12:40 PM, Gary Malouf <[email protected]> wrote:

> In 0.7.3, the way to install Spark on Mesos was to unpack it into the same 
> directory across the cluster (I assume this includes the driver program).  We 
> automated this process in our Ansible templates and all is right with the 
> world.
> 
> In the current 0.8.0 release, the process has changed: you now need to put 
> the Spark tarball in HDFS and set the additional property 
> 'spark.executor.uri' for the install to work.
> 
> The install pattern from 0.7.3 still seems to work and I plan to continue 
> using it - will it be supported when 0.9.0 stable comes out?  The current 
> SNAPSHOT still supports the old behavior, so this is just a general question.
> 
> Thanks,
> 
> Gary
