GitHub user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3112#issuecomment-62507240
In Spark Standalone `cluster` deploy mode, I think that `master` will still
point to a regular standalone master URL (e.g. `spark://my-master-hostname`),
but the driver will be launched inside a worker. To fix this case, I think
we'd have to add code on the submitter side to strip out / ignore the
`spark.driver.hostname` property so that the submitter's value isn't used on
the remote machine that hosts the driver.
If we do the ignoring inside SparkContext itself, it might cause problems
down the road if the remote container wants to inject its own
machine-specific values for these settings. Instead, I think the right way
to fix this is to stop the inappropriate sending of machine-local properties
to remote machines in the first place. Essentially, this fix should go into
SparkSubmit rather than SparkContext.
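
To make the intent concrete, here's a minimal sketch of the kind of
filtering I have in mind on the SparkSubmit side. The `StripMachineLocalProps`
object, its `strip` helper, and the property set are hypothetical names for
illustration, not existing Spark APIs:

```scala
// A sketch only: not Spark's actual SparkSubmit code; the names below are
// hypothetical illustrations of dropping machine-local properties before
// they are forwarded to the remote machine that hosts the driver.
object StripMachineLocalProps {

  // Hypothetical set of properties that describe the submitting machine
  // rather than the application, so they should not leak to a remote driver.
  private val machineLocalProps = Set("spark.driver.hostname")

  // Returns the submitted properties with machine-local entries removed,
  // leaving the driver-side machine free to fill in its own values.
  def strip(props: Map[String, String]): Map[String, String] =
    props.filter { case (key, _) => !machineLocalProps.contains(key) }

  def main(args: Array[String]): Unit = {
    val submitted = Map(
      "spark.app.name"        -> "example-app",
      "spark.driver.hostname" -> "submitter-laptop.local" // must not leak
    )
    println(strip(submitted)) // only spark.app.name survives
  }
}
```

The point is that the filtering happens before anything is shipped to the
remote machine, so the driver-side container stays free to set its own
machine-specific values.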