I have an EC2 installation of Spark Standalone with a Master and a Worker set
up. The two can talk to one another, and all ports are open in the security
group (just to rule out a port issue). When I run spark-shell on the master
node (passing --master spark://ip:7077) everything runs correctly. When I try
to submit a job from my local machine, however, I get RPC timeout errors. Does
anyone know why this happens or how to resolve it?
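For reference, here is a minimal sketch of the kind of submission I'm
attempting from my local machine (the master IP and driver-host address are
placeholders, not my actual values):

    from pyspark import SparkConf, SparkContext

    conf = (
        SparkConf()
        .setAppName("remote-submit-test")
        # Placeholder public IP of the EC2 master; standard standalone port.
        .setMaster("spark://203.0.113.10:7077")
        # The executors have to connect back to the driver, so when the
        # driver runs outside the cluster, spark.driver.host may need to be
        # an address the EC2 nodes can actually reach (placeholder here).
        .set("spark.driver.host", "198.51.100.7")
    )

    sc = SparkContext(conf=conf)
    print(sc.parallelize(range(100)).sum())  # trivial job to test the round trip
    sc.stop()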

(cross-posted at
http://stackoverflow.com/questions/36947811/application-submitted-to-remote-spark-from-local-pyspark-never-completes)

Thanks!
