Hi all,

In Spark, "spark.driver.host" defaults to the driver's hostname, so the Akka actor system listens on a URL of the form akka.tcp://hostname:port. However, when a user submits an application with spark-submit, they may set "--master spark://192.168.1.12:7077", i.e. address the Master by IP rather than hostname.
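As a concrete illustration of a possible workaround (a sketch only — the IP address, class name, and jar are hypothetical), the driver can be told to advertise the same IP form that is passed to --master, so the addresses on both sides match:

```shell
# Hypothetical workaround sketch: make the driver advertise the same
# IP that --master uses, instead of its default hostname, so the Akka
# URLs on the driver and Master agree.
# (192.168.1.12, com.example.MyApp, and my-app.jar are illustrative.)
spark-submit \
  --master spark://192.168.1.12:7077 \
  --conf spark.driver.host=192.168.1.12 \
  --class com.example.MyApp \
  my-app.jar
```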
In that case, the *AppClient* in *SparkDeploySchedulerBackend* cannot successfully register with the Master, and the console prints:

"WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory"

I think we should improve this by making Akka recognise both the hostname and its corresponding IP, or at least add a note to the Spark documentation warning users against addressing the Master by IP.

Any comments?

Regards,
Wang Hao (王灏)
CloudTeam | School of Software Engineering
Shanghai Jiao Tong University
Address: 800 Dongchuan Road, Minhang District, Shanghai, 200240
Email: wh.s...@gmail.com