Hi: I am setting up a Spark 0.9.0 cluster across multiple hosts using Docker. I use a combination of /etc/hosts edits and port mapping to route traffic correctly between the Spark master and worker containers. My issue arises when I try any operation involving a textFile (HDFS or local) against the cluster from a Spark shell running in another container:
14/03/25 18:56:00 WARN TaskSetManager: Lost TID 1 (task 0.0:1)
14/03/25 18:56:01 WARN TaskSetManager: Loss was due to java.net.NoRouteToHostException
java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)

I currently set the spark.driver.port and spark.driver.host properties for my Spark driver; however, the fileserver and HTTP broadcast server are assigned random ports. I am not mapping those two ports at the moment, so they are unreachable from inside the cluster. Is there a way to make these ports static as well, without having to modify the source? All the best, Gui
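P.S. For reference, this is roughly how I pin the driver properties before launching the shell container. The IP and port values are placeholders for my setup, not recommendations:

```shell
# Placeholder values: pin the driver's host/port via Java system properties,
# which Spark 0.9 reads at startup, then publish that port from the container.
export SPARK_JAVA_OPTS="-Dspark.driver.host=172.17.0.5 -Dspark.driver.port=7001"

# The driver port can then be published when starting the shell container, e.g.:
#   docker run -p 7001:7001 ... ./bin/spark-shell
./bin/spark-shell
```

The fileserver and HTTP broadcast server still bind to random ports each run, which is why I cannot pre-map them the same way.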