If you intend to run Hadoop MapReduce and Spark on the same cluster 
concurrently, and you have enough memory on the JobTracker master, then you can 
run the Spark master (for standalone mode, as Raymond mentions) on the same node. 
This is not necessary, but it is convenient because you only have to SSH into one 
master (I'd usually put the Hive/Shark server, Spark master, etc. on the same node).
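
For concreteness, here is a minimal sketch of what that colocation could look like with the standalone cluster scripts. The host names (jt-master for the JobTracker node, tt1..tt3 for TaskTracker nodes) are placeholders, and the script paths assume an sbin/ layout; older releases keep the same scripts under bin/.

    # conf/spark-env.sh on every node: pin the standalone master to the
    # JobTracker host (placeholder name jt-master) and the default port
    export SPARK_MASTER_IP=jt-master
    export SPARK_MASTER_PORT=7077

    # conf/slaves: one worker host per line, mirroring the TaskTracker placement
    # tt1
    # tt2
    # tt3

    # On jt-master: start the standalone master, then launch workers over SSH
    # on the hosts listed in conf/slaves (only one master node to log into)
    ./sbin/start-master.sh
    ./sbin/start-slaves.sh

Applications would then connect using the master URL spark://jt-master:7077.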

On Mon, Jan 20, 2014 at 8:14 PM, mharwida <[email protected]> wrote:

> Hi,
> Should the Spark Master run on the Hadoop Job Tracker node (and Spark
> workers on Task Trackers), or can the Spark Master reside on any Hadoop
> node?
> Thanks
> Majd
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Master-on-Hadoop-Job-Tracker-tp680.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
