Re: Upgrading 1.0.0 to 1.0.2

2014-08-26 Thread Victor Tso-Guillen
Ah, thanks.


On Tue, Aug 26, 2014 at 7:32 PM, Nan Zhu  wrote:

> Hi, Victor,
>
> the issue for you to have different version in driver and cluster is that
> you the master will shutdown your application due to the inconsistent
> SerialVersionID in ExecutorState
>
> Best,
>
> --
> Nan Zhu
>
> On Tuesday, August 26, 2014 at 10:10 PM, Matei Zaharia wrote:
>
> Things will definitely compile, and apps compiled on 1.0.0 should even be
> able to link against 1.0.2 without recompiling. The only problem is if you
> run your driver with 1.0.0 on its classpath, but the cluster has 1.0.2 in
> executors.
>
> For Mesos and YARN vs standalone, the difference is that they just have
> more features, at the expense of more complicated setup. For example, they
> have richer support for cross-application sharing (see
> https://spark.apache.org/docs/latest/job-scheduling.html), and the
> ability to run non-Spark applications on the same cluster.
>
> Matei
>
> On August 26, 2014 at 6:53:33 PM, Victor Tso-Guillen (v...@paxata.com)
> wrote:
>
> Yes, we are standalone right now. Do you have literature why one would
> want to consider Mesos or YARN for Spark deployments?
>
> Sounds like I should try upgrading my project and seeing if everything
> compiles without modification. Then I can connect to an existing 1.0.0
> cluster and see what what happens...
>
> Thanks, Matei :)
>
>
> On Tue, Aug 26, 2014 at 6:37 PM, Matei Zaharia 
> wrote:
>
>  Is this a standalone mode cluster? We don't currently make this
> guarantee, though it will likely work in 1.0.0 to 1.0.2. The problem though
> is that the standalone mode grabs the executors' version of Spark code from
> what's installed on the cluster, while your driver might be built against
> another version. On YARN and Mesos, you can more easily mix different
> versions of Spark, since each application ships its own Spark JAR (or
> references one from a URL), and this is used for both the driver and
> executors.
>
>  Matei
>
> On August 26, 2014 at 6:10:57 PM, Victor Tso-Guillen (v...@paxata.com)
> wrote:
>
>  I wanted to make sure that there's full compatibility between minor
> releases. I have a project that has a dependency on spark-core so that it
> can be a driver program and that I can test locally. However, when
> connecting to a cluster you don't necessarily know what version you're
> connecting to. Is a 1.0.0 cluster binary compatible with a 1.0.2 driver
> program? Is a 1.0.0 driver program binary compatible with a 1.0.2 cluster?
>
>
>
>


Re: Upgrading 1.0.0 to 1.0.2

2014-08-26 Thread Nan Zhu
Hi, Victor, 

The issue with running different versions in the driver and on the cluster is that 
the master will shut down your application due to an inconsistent 
serialVersionUID in ExecutorState.

Best, 

-- 
Nan Zhu

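To make the point above concrete: Java serialization compares serialVersionUID values on deserialization, so a driver and a master built from different Spark versions can refuse each other's messages. A minimal Scala sketch of the mechanism, using a hypothetical Handshake class standing in for something like ExecutorState:

import java.io._

// Hypothetical stand-in for a serialized message class such as ExecutorState.
// Two builds of the "same" class with different serialVersionUID values are
// incompatible over the wire.
@SerialVersionUID(100L)
class Handshake(val state: String) extends Serializable

object SerialVersionDemo {
  def main(args: Array[String]): Unit = {
    // Serialize with the UID compiled into this build (100L).
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(new Handshake("RUNNING"))
    out.close()

    // Deserializing with the same build succeeds; a peer compiled with a
    // different UID (say 101L) would throw java.io.InvalidClassException here,
    // which is what surfaces as the master shutting down the application.
    val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
    val restored = in.readObject().asInstanceOf[Handshake]
    println(s"Deserialized state: ${restored.state}")
  }
}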



Re: Upgrading 1.0.0 to 1.0.2

2014-08-26 Thread Matei Zaharia
Things will definitely compile, and apps compiled on 1.0.0 should even be able 
to link against 1.0.2 without recompiling. The only problem is if you run your 
driver with 1.0.0 on its classpath, but the cluster has 1.0.2 in executors.

For Mesos and YARN vs standalone, the difference is that they just have more 
features, at the expense of more complicated setup. For example, they have 
richer support for cross-application sharing (see 
https://spark.apache.org/docs/latest/job-scheduling.html), and the ability to 
run non-Spark applications on the same cluster.

Matei

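The practical takeaway is to keep the driver's spark-core dependency in lockstep with whatever the cluster runs. A hedged build.sbt sketch, where the project name is made up and 1.0.2 stands for the cluster's version:

// build.sbt -- pin the driver build to the Spark version installed on the cluster.
name := "spark-driver-app"   // hypothetical project name

scalaVersion := "2.10.4"     // Spark 1.0.x artifacts are published for Scala 2.10

// %% resolves to spark-core_2.10; bump 1.0.2 in one place when the cluster upgrades.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.2"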

Re: Upgrading 1.0.0 to 1.0.2

2014-08-26 Thread Victor Tso-Guillen
Yes, we are standalone right now. Do you have literature on why one would want
to consider Mesos or YARN for Spark deployments?

Sounds like I should try upgrading my project and seeing if everything
compiles without modification. Then I can connect to an existing 1.0.0
cluster and see what happens...

Thanks, Matei :)

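A quick way to run that experiment is a throwaway driver that points at the existing standalone master and runs one trivial job, so any driver/cluster version mismatch shows up immediately. A sketch, with the master URL as a placeholder:

import org.apache.spark.{SparkConf, SparkContext}

object VersionProbe {
  def main(args: Array[String]): Unit = {
    // The executors run whatever Spark build is installed on the cluster,
    // while this driver uses the spark-core version on its own classpath.
    val conf = new SparkConf()
      .setAppName("version-probe")
      .setMaster("spark://master-host:7077")   // placeholder standalone master URL
    val sc = new SparkContext(conf)

    // A trivial job that forces executor launch and task serialization across
    // the driver/cluster boundary; a version mismatch typically fails here.
    val total = sc.parallelize(1 to 100).reduce(_ + _)
    println(s"Sum from the cluster: $total")

    sc.stop()
  }
}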


Re: Upgrading 1.0.0 to 1.0.2

2014-08-26 Thread Matei Zaharia
Is this a standalone mode cluster? We don't currently make this guarantee, 
though it will likely work when going from 1.0.0 to 1.0.2. The problem is that the 
standalone mode grabs the executors' version of Spark code from what's 
installed on the cluster, while your driver might be built against another 
version. On YARN and Mesos, you can more easily mix different versions of 
Spark, since each application ships its own Spark JAR (or references one from a 
URL), and this is used for both the driver and executors.

Matei

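For the Mesos case, the "references one from a URL" part is usually done by pointing spark.executor.uri at a Spark distribution tarball, so driver and executors agree on one Spark build regardless of what is pre-installed. A hedged sketch; host names, ports, and paths are placeholders, and the property name should be checked against the Mesos deployment docs for your release:

import org.apache.spark.{SparkConf, SparkContext}

object MesosUriExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("mesos-uri-example")
      .setMaster("mesos://mesos-master:5050")   // placeholder Mesos master
      // Executors download and unpack this exact Spark build, so it matches the driver.
      .set("spark.executor.uri", "hdfs://namenode:8020/dist/spark-1.0.2-bin-hadoop2.tgz")
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10).count())
    sc.stop()
  }
}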

Upgrading 1.0.0 to 1.0.2

2014-08-26 Thread Victor Tso-Guillen
I wanted to make sure that there's full compatibility between minor
releases. I have a project with a dependency on spark-core so that it
can act as a driver program and I can test it locally. However, when
connecting to a cluster you don't necessarily know what version you're
connecting to. Is a 1.0.0 cluster binary compatible with a 1.0.2 driver
program? Is a 1.0.0 driver program binary compatible with a 1.0.2 cluster?
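For the local-testing half of that setup, the same driver code can be exercised without any cluster by using a local master; only the master URL changes between tests and a real deployment. A minimal sketch (Spark 1.0-era API, hence the SparkContext._ import for the pair-RDD implicits):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._   // pair-RDD implicits needed on Spark 1.0.x

object LocalSmokeTest {
  def main(args: Array[String]): Unit = {
    // local[2] runs the driver and two executor threads in one JVM, so the
    // test needs no cluster and no installed Spark distribution.
    val conf = new SparkConf().setAppName("local-smoke-test").setMaster("local[2]")
    val sc = new SparkContext(conf)

    val counts = sc.parallelize(Seq("a", "b", "a"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collect()

    println(counts.mkString(", "))
    sc.stop()
  }
}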