Hi,

It appears that Spark always attempts to use the driver's hostname to connect/broadcast. This is usually fine, except when the cluster doesn't have DNS configured, for example in a Vagrant cluster with a private network. The workers, the master, and the host (where the driver runs) can all see each other by IP. If I specify --conf "spark.driver.host=192.168.40.1", the workers can connect to the driver. However, when broadcasting anything, Spark still tries to use the host's hostname. I could add an entry to /etc/hosts on each node, but I was wondering if there's a way to avoid that hassle. Is there any way to force Spark to always use IPs and never hostnames?
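For reference, this is roughly what I'm running. The IPs, master address, and script name are placeholders for my actual setup; I've also seen SPARK_LOCAL_IP mentioned as a way to make each node advertise a specific IP, though I'm not sure it covers the broadcast case:

```shell
# Submitting from the host machine; 192.168.40.1 is the host's
# private-network IP and app.py stands in for my actual application.
spark-submit \
  --master spark://192.168.40.10:7077 \
  --conf "spark.driver.host=192.168.40.1" \
  app.py

# On each worker/master node, SPARK_LOCAL_IP (set in conf/spark-env.sh)
# is supposed to make Spark bind to and advertise a specific IP, e.g.:
#   export SPARK_LOCAL_IP=192.168.40.11
```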
Thanks,
Ashic.
