Hi guys!

If I specify bindAddress in spark-defaults.conf, then in YARN client mode everything works fine and the ApplicationMaster finds the driver. But if I submit in cluster mode, the ApplicationMaster, when hosted on a worker node, can't find the driver and fails with a bind error.
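For reference, the line I mean is something like the following (spark.driver.bindAddress is my reading of "bindAddress" above, and the address itself is just a placeholder):

    # spark-defaults.conf on the submitting host (placeholder address)
    spark.driver.bindAddress   0.0.0.0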



Any idea what the missing config is?


Note that I create the driver through a SparkSession object (not a SparkContext).
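For context, the driver is created along these lines (a minimal sketch; the app name is a placeholder and the real job logic is omitted):

    import org.apache.spark.sql.SparkSession

    object MyApp {
      def main(args: Array[String]): Unit = {
        // Master and deploy mode come from spark-submit;
        // bindAddress comes from spark-defaults.conf.
        val spark = SparkSession.builder()
          .appName("my-app") // placeholder name
          .getOrCreate()

        // ... job logic ...

        spark.stop()
      }
    }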

As a hint, I was thinking that propagating the driver config to the workers, e.g. through spark.yarn.dist.files, would solve this.
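Concretely, what I had in mind is something along these lines (just a sketch; the properties file path, class name and jar are placeholders):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.dist.files=/path/to/driver.conf \
      --class com.example.MyApp \
      my-app.jar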

Any suggestions here?
