Re: Spark Integration Patterns

2016-02-29 Thread Alexander Pivovarov
There is spark-jobserver (SJS), which is a REST interface for Spark and
Spark SQL.
You can deploy a jar with your job implementations to spark-jobserver
and use its REST API to submit jobs in sync or async mode.
In async mode you need to poll SJS to get the job result;
the result might be the actual data as JSON, or an S3/HDFS path to the data.

There are instructions on how to start job-server on AWS EMR and submit a
simple wordcount job using curl:
https://github.com/spark-jobserver/spark-jobserver/blob/master/doc/EMR.md
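
For example, a rough (untested) sketch of driving SJS from Python with the
requests library; the host/port, app name, job class and job config below
are placeholders, not something from this thread:

    import requests

    SJS = "http://localhost:8090"  # job-server host/port (placeholder)

    # upload the jar that contains your Job implementations
    with open("my-jobs.jar", "rb") as f:
        requests.post(SJS + "/jars/myapp", data=f.read())

    # submit a job synchronously; the JSON response carries the result
    resp = requests.post(
        SJS + "/jobs",
        params={"appName": "myapp",
                "classPath": "com.example.WordCount",  # hypothetical job class
                "sync": "true"},
        data="input.string = a b c a b",  # job input (Typesafe-config syntax)
    )
    print(resp.json())

    # with sync=false you get a jobId back and poll GET /jobs/<jobId> instead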

On Mon, Feb 29, 2016 at 12:54 PM, skaarthik oss <skaarthik@gmail.com>
wrote:

> Check out http://toree.incubator.apache.org/. It might help with your
> need.
>
>
>
> *From:* moshir mikael [mailto:moshir.mik...@gmail.com]
> *Sent:* Monday, February 29, 2016 5:58 AM
> *To:* Alex Dzhagriev <dzh...@gmail.com>
> *Cc:* user <user@spark.apache.org>
> *Subject:* Re: Spark Integration Patterns
>
>
>
> Thanks, will check that too; however, I just want to use Spark core RDDs and
> standard data sources.
>
>
>
> On Mon, Feb 29, 2016 at 2:54 PM, Alex Dzhagriev <dzh...@gmail.com> wrote:
>
> Hi Moshir,
>
>
>
> Regarding streaming, you can take a look at Spark Streaming, the
> micro-batching framework. If it satisfies your needs, it has a bunch of
> integrations, so the source for the jobs could be Kafka, Flume or Akka.
>
>
>
> Cheers, Alex.
>
>
>
> On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael <moshir.mik...@gmail.com>
> wrote:
>
> Hi Alex,
>
> thanks for the link. Will check it.
>
> Does someone know of a more streamlined approach ?
>
>
>
>
>
>
>
> On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev <dzh...@gmail.com> wrote:
>
> Hi Moshir,
>
>
>
> I think you can use the rest api provided with Spark:
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
>
>
>
> Unfortunately, I haven't found any documentation, but it looks fine.
>
> Thanks, Alex.
>
>
>
> On Sun, Feb 28, 2016 at 3:25 PM, mms <moshir.mik...@gmail.com> wrote:
>
> Hi, I cannot find a simple example showing how a typical application can
> 'connect' to a remote spark cluster and interact with it. Let's say I have
> a Python web application hosted somewhere *outside *a spark cluster, with
> just python installed on it. How can I talk to Spark without using a
> notebook, or using ssh to connect to a cluster master node ? I know of
> spark-submit and spark-shell, however forking a process on a remote host to
> execute a shell script seems like a lot of effort. What are the recommended
> ways to connect and query Spark from a remote client? Thanks!
> --
>
> View this message in context: Spark Integration Patterns
> <http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Integration-Patterns-tp26354.html>
> Sent from the Apache Spark User List mailing list archive
> <http://apache-spark-user-list.1001560.n3.nabble.com/> at Nabble.com.
>
>
>
>
>
>


RE: Spark Integration Patterns

2016-02-29 Thread skaarthik oss
Check out http://toree.incubator.apache.org/. It might help with your need.

 

From: moshir mikael [mailto:moshir.mik...@gmail.com] 
Sent: Monday, February 29, 2016 5:58 AM
To: Alex Dzhagriev <dzh...@gmail.com>
Cc: user <user@spark.apache.org>
Subject: Re: Spark Integration Patterns

 

Thanks, will check that too; however, I just want to use Spark core RDDs and
standard data sources.

 

On Mon, Feb 29, 2016 at 2:54 PM, Alex Dzhagriev <dzh...@gmail.com> wrote:

Hi Moshir,

 

Regarding streaming, you can take a look at Spark Streaming, the
micro-batching framework. If it satisfies your needs, it has a bunch of
integrations, so the source for the jobs could be Kafka, Flume or Akka.

 

Cheers, Alex.

 

On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael <moshir.mik...@gmail.com> wrote:

Hi Alex,

thanks for the link. Will check it.

Does someone know of a more streamlined approach ?

 

 

 

On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev <dzh...@gmail.com> wrote:

Hi Moshir,

 

I think you can use the rest api provided with Spark: 
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala

 

Unfortunately, I haven't found any documentation, but it looks fine.

Thanks, Alex.

 

On Sun, Feb 28, 2016 at 3:25 PM, mms <moshir.mik...@gmail.com> wrote:

Hi, I cannot find a simple example showing how a typical application can 
'connect' to a remote spark cluster and interact with it. Let's say I have a 
Python web application hosted somewhere outside a spark cluster, with just 
python installed on it. How can I talk to Spark without using a notebook, or 
using ssh to connect to a cluster master node ? I know of spark-submit and 
spark-shell, however forking a process on a remote host to execute a shell 
script seems like a lot of effort. What are the recommended ways to connect and
query Spark from a remote client? Thanks!


View this message in context: Spark Integration Patterns 
<http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Integration-Patterns-tp26354.html>
 
Sent from the Apache Spark User List mailing list archive 
<http://apache-spark-user-list.1001560.n3.nabble.com/>  at Nabble.com.

 

 



Re: Spark Integration Patterns

2016-02-29 Thread moshir mikael
Thanks, will check that too; however, I just want to use Spark core RDDs and
standard data sources.


On Mon, Feb 29, 2016 at 2:54 PM, Alex Dzhagriev wrote:

> Hi Moshir,
>
> Regarding streaming, you can take a look at Spark Streaming, the
> micro-batching framework. If it satisfies your needs, it has a bunch of
> integrations, so the source for the jobs could be Kafka, Flume or Akka.
>
> Cheers, Alex.
>
> On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael 
> wrote:
>
>> Hi Alex,
>> thanks for the link. Will check it.
>> Does someone know of a more streamlined approach ?
>>
>>
>>
>>
>> On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev wrote:
>>
>>> Hi Moshir,
>>>
>>> I think you can use the rest api provided with Spark:
>>> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
>>>
>>> Unfortunately, I haven't found any documentation, but it looks fine.
>>> Thanks, Alex.
>>>
>>> On Sun, Feb 28, 2016 at 3:25 PM, mms  wrote:
>>>
 Hi, I cannot find a simple example showing how a typical application
 can 'connect' to a remote spark cluster and interact with it. Let's say I
 have a Python web application hosted somewhere *outside *a spark
 cluster, with just python installed on it. How can I talk to Spark without
 using a notebook, or using ssh to connect to a cluster master node ? I know
 of spark-submit and spark-shell, however forking a process on a remote host
 to execute a shell script seems like a lot of effort. What are the
 recommended ways to connect and query Spark from a remote client? Thanks!
 --
 View this message in context: Spark Integration Patterns
 
 Sent from the Apache Spark User List mailing list archive
  at Nabble.com.

>>>
>>>
>


Re: Spark Integration Patterns

2016-02-29 Thread Alex Dzhagriev
Hi Moshir,

Regarding streaming, you can take a look at Spark Streaming, the
micro-batching framework. If it satisfies your needs, it has a bunch of
integrations, so the source for the jobs could be Kafka, Flume or Akka.

Cheers, Alex.

On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael 
wrote:

> Hi Alex,
> thanks for the link. Will check it.
> Does someone know of a more streamlined approach ?
>
>
>
>
> On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev wrote:
>
>> Hi Moshir,
>>
>> I think you can use the rest api provided with Spark:
>> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
>>
>> Unfortunately, I haven't found any documentation, but it looks fine.
>> Thanks, Alex.
>>
>> On Sun, Feb 28, 2016 at 3:25 PM, mms  wrote:
>>
>>> Hi, I cannot find a simple example showing how a typical application can
>>> 'connect' to a remote spark cluster and interact with it. Let's say I have
>>> a Python web application hosted somewhere *outside *a spark cluster,
>>> with just python installed on it. How can I talk to Spark without using a
>>> notebook, or using ssh to connect to a cluster master node ? I know of
>>> spark-submit and spark-shell, however forking a process on a remote host to
>>> execute a shell script seems like a lot of effort. What are the recommended
>>> ways to connect and query Spark from a remote client? Thanks!
>>> --
>>> View this message in context: Spark Integration Patterns
>>> 
>>> Sent from the Apache Spark User List mailing list archive
>>>  at Nabble.com.
>>>
>>
>>


Re: Spark Integration Patterns

2016-02-29 Thread moshir mikael
Hi Alex,
thanks for the link. Will check it.
Does someone know of a more streamlined approach ?




On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev wrote:

> Hi Moshir,
>
> I think you can use the rest api provided with Spark:
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
>
> Unfortunately, I haven't found any documentation, but it looks fine.
> Thanks, Alex.
>
> On Sun, Feb 28, 2016 at 3:25 PM, mms  wrote:
>
>> Hi, I cannot find a simple example showing how a typical application can
>> 'connect' to a remote spark cluster and interact with it. Let's say I have
>> a Python web application hosted somewhere *outside *a spark cluster,
>> with just python installed on it. How can I talk to Spark without using a
>> notebook, or using ssh to connect to a cluster master node ? I know of
>> spark-submit and spark-shell, however forking a process on a remote host to
>> execute a shell script seems like a lot of effort. What are the recommended
>> ways to connect and query Spark from a remote client? Thanks!
>> --
>> View this message in context: Spark Integration Patterns
>> 
>> Sent from the Apache Spark User List mailing list archive
>>  at Nabble.com.
>>
>
>


Re: Spark Integration Patterns

2016-02-29 Thread Alex Dzhagriev
Hi Moshir,

I think you can use the rest api provided with Spark:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala

Unfortunately, I haven't found any documentation, but it looks fine.
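
For what it's worth, a rough sketch of what a submission request to that
server might look like from Python. This is only a guess from reading the
source: the standalone master's REST gateway usually listens on port 6066,
and the JSON field names, jar path and main class below are assumptions:

    import requests

    master = "http://spark-master:6066"  # placeholder host; 6066 is the usual REST port

    payload = {
        "action": "CreateSubmissionRequest",
        "appResource": "hdfs:///apps/my-app.jar",   # placeholder jar location
        "mainClass": "com.example.Main",            # placeholder main class
        "appArgs": [],
        "clientSparkVersion": "1.6.0",
        "environmentVariables": {"SPARK_ENV_LOADED": "1"},
        "sparkProperties": {
            "spark.app.name": "rest-submit-example",
            "spark.master": "spark://spark-master:7077",
            "spark.jars": "hdfs:///apps/my-app.jar",
        },
    }
    r = requests.post(master + "/v1/submissions/create", json=payload)
    submission_id = r.json().get("submissionId")

    # poll the driver status afterwards
    print(requests.get(master + "/v1/submissions/status/" + submission_id).json())
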
Thanks, Alex.

On Sun, Feb 28, 2016 at 3:25 PM, mms  wrote:

> Hi, I cannot find a simple example showing how a typical application can
> 'connect' to a remote spark cluster and interact with it. Let's say I have
> a Python web application hosted somewhere *outside *a spark cluster, with
> just python installed on it. How can I talk to Spark without using a
> notebook, or using ssh to connect to a cluster master node ? I know of
> spark-submit and spark-shell, however forking a process on a remote host to
> execute a shell script seems like a lot of effort. What are the recommended
> ways to connect and query Spark from a remote client? Thanks!
> --
> View this message in context: Spark Integration Patterns
> 
> Sent from the Apache Spark User List mailing list archive
>  at Nabble.com.
>


Re: Spark Integration Patterns

2016-02-29 Thread moshir mikael
Well,

I have a personal project where I want to build a *spreadsheet* on top of
Spark.
I have a version of my app running on PostgreSQL, which does not scale, and
I would like to move the data processing to Spark.
You can import data, explore data, analyze data, visualize data ...
You don't need to be an advanced technical user to use it.
I believe it would be much easier to use Spark than PostgreSQL for this
kind of dynamic data exploration.
For instance, a formula can be implemented with a simple .map statement.

However, I need some kind of connection to Spark to iterate over RDDs.
This is what makes me wonder how you can talk to Spark from a web app.
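
To illustrate what I mean by a formula being a map, a toy local sketch in
pyspark (the columns and the formula are made up for the example):

    from pyspark import SparkConf, SparkContext

    sc = SparkContext(conf=SparkConf().setAppName("sheet").setMaster("local[*]"))

    rows = sc.parallelize([(10.0, 2.0), (7.5, 3.0), (4.0, 8.0)])  # columns A and B
    # formula: C = A * B  ->  a single map over the rows
    with_c = rows.map(lambda r: (r[0], r[1], r[0] * r[1]))
    print(with_c.collect())

    sc.stop()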










On Sun, Feb 28, 2016 at 11:36 PM, ayan guha wrote:

> I believe you are looking for something like Spark Jobserver for running
> jobs and the JDBC server for accessing data? I am curious to know more about
> it; any further discussion would be very helpful.
>
> On Mon, Feb 29, 2016 at 6:06 AM, Luciano Resende 
> wrote:
>
>> One option we have used in the past is to expose Spark application
>> functionality via REST; this would enable Python or any other client that
>> is capable of making an HTTP request to integrate with your Spark application.
>>
>> To get you started, this might be a useful reference
>>
>>
>> http://blog.michaelhamrah.com/2013/06/scala-web-apis-up-and-running-with-spray-and-akka/
>>
>>
>> On Sun, Feb 28, 2016 at 10:38 AM, moshir mikael 
>> wrote:
>>
>>> Ok,
>>> but what do I need for the program to run.
>>> In python  sparkcontext  = SparkContext(conf) only works when you have
>>> spark installed locally.
>>> AFAIK there is no *pyspark *package for python that you can install
>>> doing pip install pyspark.
>>> You actually need to install spark to get it running (e.g :
>>> https://github.com/KristianHolsheimer/pyspark-setup-guide).
>>>
>>> Does it mean you need to install Spark on the box your application runs on
>>> to use pyspark, and that this is required to connect to another remote
>>> Spark cluster?
>>> Am I missing something obvious ?
>>>
>>>
>>> On Sun, Feb 28, 2016 at 7:01 PM, Todd Nist wrote:
>>>
 Define your SparkConfig to set the master:

   val conf = new SparkConf().setAppName(AppName)
     .setMaster(SparkMaster)
     // .set("key", "value")  // any additional settings

 Where SparkMaster = "spark://SparkServerHost:7077".  So if your spark
 server hostname is "RADTech", then it would be "spark://RADTech:7077".

 Then when you create the SparkContext, pass the SparkConf  to it:

 val sparkContext = new SparkContext(conf)

 Then use the sparkContext to interact with the Spark master / cluster.
 Your program basically becomes the driver.

 HTH.

 -Todd

 On Sun, Feb 28, 2016 at 9:25 AM, mms  wrote:

> Hi, I cannot find a simple example showing how a typical application
> can 'connect' to a remote spark cluster and interact with it. Let's say I
> have a Python web application hosted somewhere *outside *a spark
> cluster, with just python installed on it. How can I talk to Spark without
> using a notebook, or using ssh to connect to a cluster master node ? I 
> know
> of spark-submit and spark-shell, however forking a process on a remote 
> host
> to execute a shell script seems like a lot of effort. What are the
> recommended ways to connect and query Spark from a remote client? Thanks!
> --
> View this message in context: Spark Integration Patterns
> 
> Sent from the Apache Spark User List mailing list archive
>  at Nabble.com.
>


>>
>>
>> --
>> Luciano Resende
>> http://people.apache.org/~lresende
>> http://twitter.com/lresende1975
>> http://lresende.blogspot.com/
>>
>
>
>
> --
> Best Regards,
> Ayan Guha
>


Re: Spark Integration Patterns

2016-02-28 Thread Yashwanth Kumar
Hi, 
To connect to Spark from a remote location and submit jobs, you can try
Spark Job Server. It's been open sourced now.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Integration-Patterns-tp26354p26357.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.




Re: Spark Integration Patterns

2016-02-28 Thread ayan guha
I believe you are looking for something like Spark Jobserver for running
jobs and the JDBC server for accessing data? I am curious to know more about
it; any further discussion would be very helpful.
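
If the JDBC route fits, a rough sketch of querying the Spark Thrift (JDBC)
server from Python with PyHive; it assumes the Thrift server was started
with sbin/start-thriftserver.sh on the default port 10000, and the host and
table names are placeholders:

    from pyhive import hive

    conn = hive.connect(host="spark-thrift-host", port=10000)
    cur = conn.cursor()
    cur.execute("SELECT category, COUNT(*) FROM events GROUP BY category")
    for row in cur.fetchall():
        print(row)
    conn.close()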

On Mon, Feb 29, 2016 at 6:06 AM, Luciano Resende 
wrote:

> One option we have used in the past is to expose Spark application
> functionality via REST; this would enable Python or any other client that
> is capable of making an HTTP request to integrate with your Spark application.
>
> To get you started, this might be a useful reference
>
>
> http://blog.michaelhamrah.com/2013/06/scala-web-apis-up-and-running-with-spray-and-akka/
>
>
> On Sun, Feb 28, 2016 at 10:38 AM, moshir mikael 
> wrote:
>
>> Ok,
>> but what do I need for the program to run.
>> In python  sparkcontext  = SparkContext(conf) only works when you have
>> spark installed locally.
>> AFAIK there is no *pyspark *package for python that you can install
>> doing pip install pyspark.
>> You actually need to install spark to get it running (e.g :
>> https://github.com/KristianHolsheimer/pyspark-setup-guide).
>>
>> Does it mean you need to install Spark on the box your application runs on
>> to use pyspark, and that this is required to connect to another remote
>> Spark cluster?
>> Am I missing something obvious ?
>>
>>
>> On Sun, Feb 28, 2016 at 7:01 PM, Todd Nist wrote:
>>
>>> Define your SparkConfig to set the master:
>>>
>>>   val conf = new SparkConf().setAppName(AppName)
>>>     .setMaster(SparkMaster)
>>>     // .set("key", "value")  // any additional settings
>>>
>>> Where SparkMaster = "spark://SparkServerHost:7077".  So if your spark
>>> server hostname is "RADTech", then it would be "spark://RADTech:7077".
>>>
>>> Then when you create the SparkContext, pass the SparkConf  to it:
>>>
>>> val sparkContext = new SparkContext(conf)
>>>
>>> Then use the sparkContext to interact with the Spark master / cluster.
>>> Your program basically becomes the driver.
>>>
>>> HTH.
>>>
>>> -Todd
>>>
>>> On Sun, Feb 28, 2016 at 9:25 AM, mms  wrote:
>>>
 Hi, I cannot find a simple example showing how a typical application
 can 'connect' to a remote spark cluster and interact with it. Let's say I
 have a Python web application hosted somewhere *outside *a spark
 cluster, with just python installed on it. How can I talk to Spark without
 using a notebook, or using ssh to connect to a cluster master node ? I know
 of spark-submit and spark-shell, however forking a process on a remote host
 to execute a shell script seems like a lot of effort. What are the
 recommended ways to connect and query Spark from a remote client? Thanks!
 --
 View this message in context: Spark Integration Patterns
 
 Sent from the Apache Spark User List mailing list archive
  at Nabble.com.

>>>
>>>
>
>
> --
> Luciano Resende
> http://people.apache.org/~lresende
> http://twitter.com/lresende1975
> http://lresende.blogspot.com/
>



-- 
Best Regards,
Ayan Guha


Re: Spark Integration Patterns

2016-02-28 Thread Luciano Resende
One option we have used in the past is to expose Spark application
functionality via REST; this would enable Python or any other client that
is capable of making an HTTP request to integrate with your Spark application.

To get you started, this might be a useful reference

http://blog.michaelhamrah.com/2013/06/scala-web-apis-up-and-running-with-spray-and-akka/
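
That post is Scala/Spray, but the same idea in Python might look roughly
like the sketch below: the Spark driver itself is a long-running web
service, and remote clients only speak HTTP to it. It assumes pyspark is
importable on the box running the service; the master URL, port and the
word-count endpoint are placeholders:

    from flask import Flask, jsonify, request
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("rest-driver").setMaster("spark://spark-master:7077")
    sc = SparkContext(conf=conf)   # one long-lived context; this process is the driver
    app = Flask(__name__)

    @app.route("/wordcount")
    def wordcount():
        path = request.args.get("path", "hdfs:///data/sample.txt")
        counts = (sc.textFile(path)
                    .flatMap(lambda line: line.split())
                    .map(lambda w: (w, 1))
                    .reduceByKey(lambda a, b: a + b)
                    .take(20))
        return jsonify(dict(counts))

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)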


On Sun, Feb 28, 2016 at 10:38 AM, moshir mikael 
wrote:

> Ok,
> but what do I need for the program to run.
> In python  sparkcontext  = SparkContext(conf) only works when you have
> spark installed locally.
> AFAIK there is no *pyspark *package for python that you can install doing
> pip install pyspark.
> You actually need to install spark to get it running (e.g :
> https://github.com/KristianHolsheimer/pyspark-setup-guide).
>
> Does it mean you need to install Spark on the box your application runs on
> to use pyspark, and that this is required to connect to another remote
> Spark cluster?
> Am I missing something obvious ?
>
>
> On Sun, Feb 28, 2016 at 7:01 PM, Todd Nist wrote:
>
>> Define your SparkConfig to set the master:
>>
>>   val conf = new SparkConf().setAppName(AppName)
>>     .setMaster(SparkMaster)
>>     // .set("key", "value")  // any additional settings
>>
>> Where SparkMaster = "spark://SparkServerHost:7077".  So if your spark
>> server hostname is "RADTech", then it would be "spark://RADTech:7077".
>>
>> Then when you create the SparkContext, pass the SparkConf  to it:
>>
>> val sparkContext = new SparkContext(conf)
>>
>> Then use the sparkContext to interact with the Spark master / cluster.
>> Your program basically becomes the driver.
>>
>> HTH.
>>
>> -Todd
>>
>> On Sun, Feb 28, 2016 at 9:25 AM, mms  wrote:
>>
>>> Hi, I cannot find a simple example showing how a typical application can
>>> 'connect' to a remote spark cluster and interact with it. Let's say I have
>>> a Python web application hosted somewhere *outside *a spark cluster,
>>> with just python installed on it. How can I talk to Spark without using a
>>> notebook, or using ssh to connect to a cluster master node ? I know of
>>> spark-submit and spark-shell, however forking a process on a remote host to
>>> execute a shell script seems like a lot of effort. What are the recommended
>>> ways to connect and query Spark from a remote client? Thanks!
>>> --
>>> View this message in context: Spark Integration Patterns
>>> 
>>> Sent from the Apache Spark User List mailing list archive
>>>  at Nabble.com.
>>>
>>
>>


-- 
Luciano Resende
http://people.apache.org/~lresende
http://twitter.com/lresende1975
http://lresende.blogspot.com/


Re: Spark Integration Patterns

2016-02-28 Thread Todd Nist
I'm not sure about Python; I'm not an expert in that area. Based on PR
https://github.com/apache/spark/pull/8318, I believe you are correct that
Spark would need to be installed for you to be able to currently leverage
the pyspark package.
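
For reference, once a local Spark install makes pyspark importable (for
example by putting $SPARK_HOME/python on PYTHONPATH), a minimal Python
analogue of the Scala snippet quoted below might be (the master URL is a
placeholder):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("remote-client")
            .setMaster("spark://spark-master:7077"))
    sc = SparkContext(conf=conf)

    rdd = sc.parallelize(range(1000))
    print(rdd.sum())   # this process acts as the driver for the remote cluster

    sc.stop()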

On Sun, Feb 28, 2016 at 1:38 PM, moshir mikael 
wrote:

> Ok,
> but what do I need for the program to run.
> In python  sparkcontext  = SparkContext(conf) only works when you have
> spark installed locally.
> AFAIK there is no *pyspark *package for python that you can install doing
> pip install pyspark.
> You actually need to install spark to get it running (e.g :
> https://github.com/KristianHolsheimer/pyspark-setup-guide).
>
> Does it mean you need to install Spark on the box your application runs on
> to use pyspark, and that this is required to connect to another remote
> Spark cluster?
> Am I missing something obvious ?
>
>
> On Sun, Feb 28, 2016 at 7:01 PM, Todd Nist wrote:
>
>> Define your SparkConfig to set the master:
>>
>>   val conf = new SparkConf().setAppName(AppName)
>>     .setMaster(SparkMaster)
>>     // .set("key", "value")  // any additional settings
>>
>> Where SparkMaster = "spark://SparkServerHost:7077".  So if your spark
>> server hostname is "RADTech", then it would be "spark://RADTech:7077".
>>
>> Then when you create the SparkContext, pass the SparkConf  to it:
>>
>> val sparkContext = new SparkContext(conf)
>>
>> Then use the sparkContext to interact with the Spark master / cluster.
>> Your program basically becomes the driver.
>>
>> HTH.
>>
>> -Todd
>>
>> On Sun, Feb 28, 2016 at 9:25 AM, mms  wrote:
>>
>>> Hi, I cannot find a simple example showing how a typical application can
>>> 'connect' to a remote spark cluster and interact with it. Let's say I have
>>> a Python web application hosted somewhere *outside *a spark cluster,
>>> with just python installed on it. How can I talk to Spark without using a
>>> notebook, or using ssh to connect to a cluster master node ? I know of
>>> spark-submit and spark-shell, however forking a process on a remote host to
>>> execute a shell script seems like a lot of effort. What are the recommended
>>> ways to connect and query Spark from a remote client? Thanks!
>>> --
>>> View this message in context: Spark Integration Patterns
>>> 
>>> Sent from the Apache Spark User List mailing list archive
>>>  at Nabble.com.
>>>
>>
>>


Re: Spark Integration Patterns

2016-02-28 Thread Todd Nist
Define your SparkConfig to set the master:

  val conf = new SparkConf().setAppName(AppName)
    .setMaster(SparkMaster)
    // .set("key", "value")  // any additional settings

Where SparkMaster = "spark://SparkServerHost:7077".  So if your spark
server hostname is "RADTech", then it would be "spark://RADTech:7077".

Then when you create the SparkContext, pass the SparkConf  to it:

val sparkContext = new SparkContext(conf)

Then use the sparkContext to interact with the Spark master / cluster.
Your program basically becomes the driver.

HTH.

-Todd

On Sun, Feb 28, 2016 at 9:25 AM, mms  wrote:

> Hi, I cannot find a simple example showing how a typical application can
> 'connect' to a remote spark cluster and interact with it. Let's say I have
> a Python web application hosted somewhere *outside *a spark cluster, with
> just python installed on it. How can I talk to Spark without using a
> notebook, or using ssh to connect to a cluster master node ? I know of
> spark-submit and spark-shell, however forking a process on a remote host to
> execute a shell script seems like a lot of effort. What are the recommended
> ways to connect and query Spark from a remote client? Thanks!
> --
> View this message in context: Spark Integration Patterns
> 
> Sent from the Apache Spark User List mailing list archive
>  at Nabble.com.
>