Check out http://toree.incubator.apache.org/. It might help with your need.
From: moshir mikael [mailto:moshir.mik...@gmail.com]
Sent: Monday, February 29, 2016 5:58 AM
To: Alex Dzhagriev <dzh...@gmail.com>
Cc: user <user@spark.apache.org>
Subject: Re: Spark Integration Pattern
Thanks, will check that too. However, I just want to use Spark core RDDs and
standard data sources.
On Mon, Feb 29, 2016 at 2:54 PM, Alex Dzhagriev wrote:
Hi Moshir,

Regarding streaming, you can take a look at Spark Streaming, the
micro-batching framework. If it satisfies your needs, it has a number of
integrations: the source for the jobs could be Kafka, Flume, or Akka.

Cheers, Alex.
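The micro-batch model mentioned above can be illustrated without Spark at all. The sketch below is plain Python, not Spark code; it only shows the idea of turning a continuous stream into a sequence of small batch jobs:

```python
def micro_batches(records, batch_size):
    """Group an incoming stream of records into fixed-size batches.

    A loose, Spark-free illustration of the micro-batch idea: Spark
    Streaming actually batches by *time interval*, not by record count,
    but the principle -- turn a stream into a sequence of small batch
    computations -- is the same.
    """
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush any final partial batch
        yield batch

# Each yielded batch would be handed to an ordinary batch computation.
print(list(micro_batches(["a", "b", "c", "d", "e"], 2)))
# [['a', 'b'], ['c', 'd'], ['e']]
```

In Spark Streaming proper, the equivalent grouping is done for you by the DStream abstraction, and sources like Kafka or Flume feed the stream.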
On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael wrote:
Hi Alex,

Thanks for the link, will check it. Does someone know of a more streamlined
approach?
On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev wrote:
Hi Moshir,

I think you can use the REST API provided with Spark:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
Unfortunately, I haven't found any documentation for it, but it looks fine.

Thanks, Alex.
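For reference, submissions to that server are JSON POSTs. A minimal sketch of building such a request follows; the host, port, jar path, class name, and Spark version are all placeholders you would adapt, and since this API is undocumented the field names below are taken from the RestSubmissionServer source rather than any official reference:

```python
import json

# Placeholder URL -- the standalone master's REST server typically
# listens on port 6066, separate from the 7077 submission port.
MASTER_REST_URL = "http://spark-master:6066/v1/submissions/create"

payload = {
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "1.6.0",          # match your cluster's version
    "appResource": "hdfs:///jobs/my-job.jar",  # placeholder jar location
    "mainClass": "com.example.MyJob",          # placeholder main class
    "appArgs": [],
    "sparkProperties": {
        "spark.app.name": "MyJob",
        "spark.master": "spark://spark-master:7077",
        "spark.jars": "hdfs:///jobs/my-job.jar",
    },
    "environmentVariables": {"SPARK_ENV_LOADED": "1"},
}

body = json.dumps(payload).encode("utf-8")
# To actually submit (requires a reachable master with the REST server
# enabled), POST the body with Content-Type: application/json, e.g.:
# import urllib.request
# req = urllib.request.Request(MASTER_REST_URL, data=body,
#                              headers={"Content-Type": "application/json"})
# resp = urllib.request.urlopen(req)
print(payload["action"])
# CreateSubmissionRequest
```

The response is also JSON and includes a submission id you can poll for status. Treat this as a sketch: being an internal API, it may change between Spark versions.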
On Sun, Feb 28, 2016 at 3:25
Well,

I have a personal project where I want to build a *spreadsheet* on top of
Spark. I have a version of my app running on PostgreSQL, which does not
scale, and I would like to move the data processing to Spark. You can import
data, explore data, analyze data, visualize data ...
You don't need to be
Hi,

To connect to Spark from a remote location and submit jobs, you can try
Spark Job Server. It's been open-sourced now.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Integration-Patterns-tp26354p26357.html
Sent from the Apache Spark User List
I believe you are looking for something like Spark Jobserver for running
jobs and a JDBC server for accessing data? I am curious to know more about
it; any further discussion will be very helpful.
On Mon, Feb 29, 2016 at 6:06 AM, Luciano Resende wrote:
One option we have used in the past is to expose Spark application
functionality via REST; this would enable Python or any other client
capable of making an HTTP request to integrate with your Spark application.
To get you started, this might be a useful reference
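The pattern described above can be sketched with nothing but the Python standard library. Here the Spark call itself is stubbed out: `compute` is a hypothetical placeholder for whatever RDD action your driver application would run, and the port and route are arbitrary choices, not anything Spark prescribes:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def compute(params):
    # Placeholder: in a real application this would trigger a Spark
    # action on the driver, e.g. rdd.filter(...).count(), using the
    # parameters supplied by the remote client.
    return {"count": len(params.get("items", []))}


class JobHandler(BaseHTTPRequestHandler):
    """Thin HTTP front-end running in the same process as the driver."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        result = json.dumps(compute(params)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)


if __name__ == "__main__":
    # Any HTTP-capable client (curl, requests, a browser) can now
    # integrate with the Spark application through this endpoint.
    HTTPServer(("0.0.0.0", 8080), JobHandler).serve_forever()
```

The key design point is that the HTTP layer lives alongside the long-running SparkContext, so each request reuses the existing context instead of paying spark-submit startup cost per job.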
I'm not sure about Python, not an expert in that area. Based on the PR
https://github.com/apache/spark/pull/8318, I believe you are correct that
Spark would need to be installed for you to be able to currently leverage
the pyspark package.
On Sun, Feb 28, 2016 at 1:38 PM, moshir mikael wrote:
Define your SparkConf to set the master:

    val conf = new SparkConf()
      .setAppName(AppName)
      .setMaster(SparkMaster)
      // plus any additional .set(key, value) properties you need

where SparkMaster = "spark://SparkServerHost:7077". So if your Spark
server hostname is "RADTech" then it would be "spark://RADTech:7077".
Then when you create