Have you looked at:

https://spark.apache.org/docs/latest/spark-standalone.html
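
That page covers your case: standalone mode does not need HDFS, only that every node can reach the data, which your shared filesystem already provides. Roughly, you start a master on one node with sbin/start-master.sh, start a worker on each of the other nodes with sbin/start-slave.sh spark://<master-host>:7077 (or list the workers in conf/slaves and run sbin/start-slaves.sh), and then point your application at the master URL instead of a local master. A minimal PySpark sketch of the application side, with <master-host> and the data path as placeholders:

    from pyspark import SparkConf, SparkContext

    # Point at the standalone master instead of running locally.
    # <master-host> is a placeholder for the node where start-master.sh was run.
    conf = (SparkConf()
            .setAppName("shared-fs-example")
            .setMaster("spark://<master-host>:7077"))
    sc = SparkContext(conf=conf)

    # Because the data lives on a filesystem mounted at the same path on every
    # node, plain file:// paths work; no HDFS is required.
    lines = sc.textFile("file:///shared/data/input.txt")
    print(lines.count())

    sc.stop()

Equivalently, you can leave setMaster out of the script and pass the master at submit time with spark-submit --master spark://<master-host>:7077.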

On Thu, Jun 23, 2016 at 12:28 PM, avendaon <jnan...@sharcnet.ca> wrote:

> Hi all,
>
> I have a cluster with multiple nodes, and the data partition is shared,
> so every node in my cluster can access the data I am working on. Right
> now, I run Spark on a single node, and it works beautifully.
>
> My question is: is it possible to run Spark across multiple compute nodes
> in standalone mode (I don't have HDFS/Hadoop installed)? If so, what do I
> have to add or change in my Spark installation or my Spark script (either
> Python or Scala)?
>
> Thanks,
>
> Jose
>
>
>
