I'm running Spark on YARN and will be upgrading to 1.3 soon.

For the integration, will I need to install pandas and scikit-learn on every
node in my cluster, or does the integration happen only on the edge node
after a collect() in yarn-client mode?
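For context, the driver-only pattern I have in mind is sketched below. This is an illustration, not my actual job: the `rows` list stands in for the output of an `rdd.collect()`, after which pandas and scikit-learn would run purely on the driver/edge node.

```python
# Hypothetical sketch: once data is collect()-ed, it lives on the driver,
# so only that machine needs pandas and scikit-learn installed.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Stand-in for `rdd.collect()` output: a small list of (feature, label) rows.
rows = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Both libraries execute on the driver only in this pattern.
df = pd.DataFrame(rows, columns=["x", "y"])
model = LinearRegression().fit(df[["x"]], df["y"])
print(float(model.coef_[0]))  # fitted slope
```

By contrast, if pandas were used inside distributed code (e.g. within a `mapPartitions` function), I assume it would have to be importable on every executor node, which is what prompts the question.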



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/What-is-needed-to-integrate-Spark-with-Pandas-and-scikit-learn-tp23410.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
