Hi Moshir,

Regarding streaming, you can take a look at Spark Streaming, the
micro-batching framework. If it satisfies your needs, it ships with a number
of integrations, so the source for a job could be Kafka, Flume, or Akka.
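
For instance, here is a minimal sketch of a Kafka-backed streaming job
(assuming Spark 1.x with the spark-streaming-kafka package on the classpath;
the app name, topic, and broker address below are placeholders):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

# One SparkContext per application; emit a micro-batch every 10 seconds.
sc = SparkContext(appName="KafkaExample")
ssc = StreamingContext(sc, 10)

# Direct stream over the "events" topic; records arrive as (key, value).
stream = KafkaUtils.createDirectStream(
    ssc, ["events"], {"metadata.broker.list": "localhost:9092"})

# Count the records in each batch and print the result on the driver.
stream.map(lambda kv: kv[1]).count().pprint()

ssc.start()
ssc.awaitTermination()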

Cheers, Alex.

On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael <moshir.mik...@gmail.com>
wrote:

> Hi Alex,
> thanks for the link. Will check it.
> Does anyone know of a more streamlined approach?
>
> On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev <dzh...@gmail.com> wrote:
>
>> Hi Moshir,
>>
>> I think you can use the rest api provided with Spark:
>> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
>>
>> Unfortunately, I haven't found any documentation for it, but it looks
>> usable.
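>>
>> For instance, a submission request can be POSTed to the standalone
>> master's REST port (6066 by default). A minimal sketch, assuming the
>> requests library; the host, jar path, main class, and version are all
>> placeholders, and the field names follow the (undocumented)
>> RestSubmissionClient, so treat them as assumptions:
>>
>> import requests
>>
>> # CreateSubmissionRequest against a standalone master's REST endpoint.
>> payload = {
>>     "action": "CreateSubmissionRequest",
>>     "appResource": "hdfs:///jars/my-app.jar",
>>     "mainClass": "com.example.MyApp",
>>     "appArgs": [],
>>     "clientSparkVersion": "1.6.0",
>>     "environmentVariables": {},
>>     "sparkProperties": {
>>         "spark.app.name": "MyApp",
>>         "spark.master": "spark://master-host:6066",
>>         "spark.submit.deployMode": "cluster",
>>         "spark.jars": "hdfs:///jars/my-app.jar",
>>     },
>> }
>>
>> resp = requests.post(
>>     "http://master-host:6066/v1/submissions/create", json=payload)
>> print(resp.json())  # on success the response carries a submissionId
>>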
>> Thanks, Alex.
>>
>> On Sun, Feb 28, 2016 at 3:25 PM, mms <moshir.mik...@gmail.com> wrote:
>>
>>> Hi, I cannot find a simple example showing how a typical application can
>>> 'connect' to a remote Spark cluster and interact with it. Let's say I
>>> have a Python web application hosted somewhere *outside* a Spark cluster,
>>> with just Python installed on it. How can I talk to Spark without using a
>>> notebook, or without using ssh to connect to a cluster master node? I
>>> know of spark-submit and spark-shell, but forking a process on a remote
>>> host to execute a shell script seems like a lot of effort. What are the
>>> recommended ways to connect to and query Spark from a remote client?
>>> Thanks!
>>>
>>
>>
