Yes, with the new Spark Structured Streaming you can keep submitting streaming jobs against the same SparkContext in different requests (or create a new SparkContext in a request if required). The SparkJob implementation gets a handle to the SparkContext, which will be either the existing one or a new one depending on the REST API calls -- see the project's GitHub page for details on transient vs. persistent SparkContexts.
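For illustration, a rough sketch of such a job is below. It assumes the classic spark.jobserver.SparkJob trait (newer job-server releases use a slightly different job API, so check the project README for exact signatures), and the rate source/console sink are just placeholders:

// Minimal sketch: a job-server job that starts a structured streaming query
// on the SparkContext the job server hands in. The source/sink and checkpoint
// path are illustrative only.
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.sql.SparkSession
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object StreamingQueryJob extends SparkJob {

  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  // Each REST call receives the (existing or new) SparkContext from the job server.
  override def runJob(sc: SparkContext, config: Config): Any = {
    // Build a SparkSession on top of the context provided by the job server.
    val spark = SparkSession.builder().config(sc.getConf).getOrCreate()

    // Placeholder source: a rate stream written to the console sink.
    val input = spark.readStream.format("rate").load()

    val query = input.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/streaming-query-checkpoint") // illustrative path
      .start()

    // Return the query id; the query keeps running inside the long-lived context.
    query.id.toString
  }
}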
With the old Spark Streaming (DStream) model, you cannot add new DStreams once the StreamingContext has started (a long-standing limitation of that model), so you can keep submitting jobs against the same context only up to the point where one final job starts the StreamingContext.
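A short sketch of that limitation (the socket source is hypothetical):

// With DStreams, every stream must be wired up before ssc.start(); adding
// inputs or output operations to a started context throws IllegalStateException.
import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Seconds, StreamingContext}

def dstreamLimitation(sc: SparkContext): Unit = {
  val ssc = new StreamingContext(sc, Seconds(10))

  // All DStreams have to be defined here, up front.
  val lines = ssc.socketTextStream("localhost", 9999) // hypothetical source
  lines.count().print()

  ssc.start() // after this point the streaming graph is frozen

  // A later job calling ssc.socketTextStream(...) on this started context fails,
  // which is why only the last submitted job can be the one that starts it.
  ssc.awaitTermination()
}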

regards
sumedh

On Monday 24 July 2017 06:09 AM, kant kodali wrote:
@Sumedh Can I run streaming jobs on the same context with spark-jobserver? That way there is no waiting for results, since each Spark SQL job is expected to stream forever and the results of each streaming job are captured through a message queue.

In my case, each Spark SQL query will be a streaming job.

On Sat, Jul 22, 2017 at 6:19 AM, Sumedh Wale <sw...@snappydata.io> wrote:
On Saturday 22 July 2017 01:31 PM, kant kodali wrote:
Is there a way to run Spark SQL through REST?

There is spark-jobserver (https://github.com/spark-jobserver/spark-jobserver). It does more than just provide a REST API (e.g., long-running SparkContexts).
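As a rough idea, a job that runs an ad-hoc SQL statement submitted over REST could look like the sketch below. It assumes the classic spark.jobserver.SparkJob trait, and the "sql" config key is purely illustrative:

// Minimal sketch of running SQL text passed in the REST request body.
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.sql.SparkSession
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}

object SqlOverRestJob extends SparkJob {

  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    if (config.hasPath("sql")) SparkJobValid
    else SparkJobInvalid("missing 'sql' parameter")

  override def runJob(sc: SparkContext, config: Config): Any = {
    val spark = SparkSession.builder().config(sc.getConf).getOrCreate()
    // Execute the submitted query and return the rows as strings.
    spark.sql(config.getString("sql")).collect().map(_.toString)
  }
}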

regards

--
Sumedh Wale
SnappyData (http://www.snappydata.io)


