Try https://github.com/spark-jobserver/spark-jobserver
The server holds the SparkContext (sc), so you can share it among
different jobs.
Think of it as Spark as a Service via a REST API.

They are also about to add sharing of the SQLContext:
https://github.com/spark-jobserver/spark-jobserver/pull/32
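A typical interaction with the job server looks roughly like this (a sketch only: the jar name, class path, and context name are made-up placeholders, and the default port 8090 is assumed; check the project README for the exact endpoints and parameters):

```shell
# Upload an application jar (placeholder name "my-app.jar")
curl --data-binary @my-app.jar localhost:8090/jars/my-app

# Create a long-lived, shared SparkContext named "shared-ctx"
curl -X POST 'localhost:8090/contexts/shared-ctx?num-cpu-cores=4&memory-per-node=512m'

# Run a job against the shared context; later jobs reuse the same sc
# (com.example.MyJob is a hypothetical job class)
curl -X POST 'localhost:8090/jobs?appName=my-app&classPath=com.example.MyJob&context=shared-ctx&sync=true'
```

Because every job submitted with `context=shared-ctx` runs inside the same SparkContext, cached RDDs are visible across jobs.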

On Wed, Jan 7, 2015 at 11:09 PM, Tim Chen <[email protected]> wrote:

> Hi John,
>
> I'm not quite familiar with how the SparkSQL Thrift Server is started, but
> in general you can't share a Mesos driver between two different frameworks
> in Spark. Each spark-shell or spark-submit invocation creates a new
> framework that independently gets offers and uses resources from Mesos.
>
> If you want your executors to be long-running, then you will want to run
> in coarse-grained mode, which also keeps your cache.
>
> Tim
>
> On Tue, Jan 6, 2015 at 5:40 AM, John Omernik <[email protected]> wrote:
>
>> I have Spark 1.2 running nicely, both with the SparkSQL Thrift Server
>> and in IPython.
>>
>> My question is this: I am running on Mesos in fine-grained mode; what
>> is the appropriate way to manage the two instances? Should I run the
>> Spark SQL Thrift Server in coarse-grained mode so that RDDs can
>> persist? Should I run both as separate Spark instances in fine-grained
>> mode (I'd have to change the port on one of them)? Is there a way to
>> have one Spark driver serve both things so I only use resources for
>> one driver? How would you run this in a production environment?
>>
>> Thanks!
>>
>> John
>>
>
>
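For reference, coarse-grained mode on Mesos (as Tim suggests above) is enabled in Spark 1.x with the `spark.mesos.coarse` property. A sketch of starting the Thrift Server that way (the ZooKeeper master URL and core cap below are placeholder values for your cluster):

```shell
# Coarse-grained Mesos mode: executors are long-running, so cached
# RDDs survive between queries instead of being torn down per task.
./sbin/start-thriftserver.sh \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=8   # cap cores so this framework doesn't hold the whole cluster
```

Without `spark.cores.max`, a coarse-grained framework will grab all offered cores, which matters when two frameworks (Thrift Server and IPython) share the same Mesos cluster.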
