Supporting multiple versions of Spark (e.g. 1.6 and 2.1) for batch jobs is
easy. But supporting multiple Spark versions for *interactive sessions*
requires major changes in Livy (and possibly in Spark). The main reason is
that, for batch jobs, only user application code runs on the Spark/YARN
cluster, whereas for interactive sessions, parts of Livy's own code run on
the Spark/YARN cluster. If Livy is compiled against a particular major
version of Spark (say 2.1.0), it cannot run interactive sessions on a
different Spark version (say 1.6). I would like to know how we can get
around this restriction.
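
For concreteness, the batch/interactive distinction above corresponds to
Livy's two REST endpoints: POST /batches launches only the user's own jar or
script, while POST /sessions starts a session in which Livy's REPL/driver
code itself runs inside the YARN application. A minimal sketch, assuming a
Livy server at http://livy:8998 and an illustrative jar path and class name:

    import requests  # assumes the 'requests' package is available

    LIVY_URL = "http://livy:8998"  # hypothetical Livy server address

    # Batch job: only the user's application runs on the cluster, so the
    # Spark version Livy was built against matters much less.
    batch = requests.post(
        f"{LIVY_URL}/batches",
        json={"file": "hdfs:///apps/my-app.jar", "className": "com.example.MyApp"},
        headers={"Content-Type": "application/json"},
    )
    print(batch.json())

    # Interactive session: Livy ships its own REPL/driver code to the
    # cluster, which is why that code must be binary-compatible with the
    # Spark version the session runs on.
    session = requests.post(
        f"{LIVY_URL}/sessions",
        json={"kind": "spark"},
        headers={"Content-Type": "application/json"},
    )
    print(session.json())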

Thanks,
Meisam

On Fri, Mar 9, 2018 at 9:50 AM Marcelo Vanzin <van...@cloudera.com> wrote:

> On Fri, Mar 9, 2018 at 1:36 AM, Matteo Durighetto
> <m.durighe...@miriade.it> wrote:
> >           I think it's correct that the Livy Admin manages the multiple
> > versions of Spark, but the user needs to choose which version to use
> > to submit the job.
> ...
> > So a Livy Admin could manage the configuration, and a data scientist or a
> > dev could submit the job by calling the "alias" (i.e. spark_1.6, spark_2.1,
> > or spark_2.2) that points to a different Spark / Java,
> > and test different environments for their applications or machine learning projects.
>
> That sounds closer to what I had in mind originally. User asks for a
> specific version of Spark using a name defined by the admin, instead
> of providing an explicit SPARK_HOME env variable or something like
> that.
>
>
> --
> Marcelo
>
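
To illustrate the alias idea above: the admin would map names to Spark
installations in Livy's configuration, and the user would pass the name when
creating a session. The configuration keys and the "sparkVersion" request
field below are hypothetical, a sketch of the proposal rather than an
existing Livy feature:

    import requests  # assumes the 'requests' package is available

    # Hypothetical admin-side configuration in livy.conf, mapping an
    # admin-chosen alias to a SPARK_HOME on the Livy server:
    #
    #   livy.server.spark-homes.spark_1.6 = /opt/spark-1.6.3
    #   livy.server.spark-homes.spark_2.1 = /opt/spark-2.1.0
    #
    # (These keys are illustrative only, not an existing Livy setting.)

    LIVY_URL = "http://livy:8998"  # hypothetical Livy server address

    # The user selects the alias when creating an interactive session,
    # instead of providing an explicit SPARK_HOME:
    resp = requests.post(
        f"{LIVY_URL}/sessions",
        json={
            "kind": "spark",
            "sparkVersion": "spark_2.1",  # hypothetical field carrying the alias
        },
        headers={"Content-Type": "application/json"},
    )
    print(resp.status_code, resp.json())

Presumably Livy would then resolve the alias to the matching SPARK_HOME (and
to REPL jars built against that Spark version) before launching the session
on YARN.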
