Hello,

I have an HA-enabled YARN cluster with two resource managers. When submitting
jobs via "spark-submit --master yarn-cluster", it appears that the driver looks
explicitly for the "yarn.resourcemanager.address" property rather than
round-robining through the resource managers via the
"yarn.client.failover-proxy-provider" property, which is set to
"org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider".
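For reference, the relevant HA section of my yarn-site.xml looks roughly like
this (the cluster ID, RM IDs, and hostnames below are placeholders):

    <property>
      <name>yarn.resourcemanager.ha.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.resourcemanager.cluster-id</name>
      <value>yarn-cluster</value>  <!-- placeholder cluster ID -->
    </property>
    <property>
      <name>yarn.resourcemanager.ha.rm-ids</name>
      <value>rm1,rm2</value>
    </property>
    <property>
      <name>yarn.resourcemanager.hostname.rm1</name>
      <value>rm1.example.com</value>  <!-- placeholder hostname -->
    </property>
    <property>
      <name>yarn.resourcemanager.hostname.rm2</name>
      <value>rm2.example.com</value>  <!-- placeholder hostname -->
    </property>
    <property>
      <name>yarn.client.failover-proxy-provider</name>
      <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
    </property>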

If I explicitly set "yarn.resourcemanager.address" to the active resource
manager, jobs submit fine.
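Concretely, adding something like the following to yarn-site.xml unblocks
submission (hostname is a placeholder; 8032 is the default RM client port),
but it obviously pins the client to a single RM and defeats the failover:

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>rm1.example.com:8032</value>  <!-- placeholder: the currently active RM -->
    </property>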

Is there a way to make "spark-submit --master yarn-cluster" respect the
failover proxy provider?

Thanks in advance,
Matt