HeartSaVioR opened a new pull request #24163: [SPARK-26606][CORE] Handle driver options properly when submitting to standalone cluster mode via legacy Client
URL: https://github.com/apache/spark/pull/24163
 
 
   ## What changes were proposed in this pull request?
   
   This patch fixes an issue where `ClientEndpoint` in standalone cluster mode does not recognize driver options that are passed via SparkConf rather than as system properties. When `Client` is executed directly from the CLI, these options must be provided as system properties, but with `spark-submit` they can also arrive via SparkConf, so both sources need to be consulted (see the sketch below).
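   
   For illustration, a minimal sketch of the lookup order this implies; the helper name `resolveDriverOption` is hypothetical, not the patch's actual code:
   
   ```scala
   import org.apache.spark.SparkConf
   
   // Prefer a system property (set when Client is launched directly from the
   // CLI); otherwise fall back to the SparkConf that spark-submit populated.
   def resolveDriverOption(key: String, conf: SparkConf): Option[String] =
     sys.props.get(key).orElse(conf.getOption(key))
   
   // e.g. resolveDriverOption("spark.driver.extraJavaOptions", conf)
   ```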
   
   ## How was this patch tested?
   
   Manually tested via the following steps:
   
   1) set up a standalone cluster (launch master and worker via `./sbin/start-all.sh`)
   
   2) submit one of the example apps in standalone cluster mode
   
   ```
   ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
     --master "spark://localhost:7077" \
     --conf "spark.driver.extraJavaOptions=-Dfoo=BAR" \
     --deploy-mode "cluster" \
     --num-executors 1 --driver-memory 512m \
     --executor-memory 512m --executor-cores 1 \
     examples/jars/spark-examples*.jar 10
   ```
   
   3) check that `foo=BAR` appears among the system properties on the Spark UI's Environment tab
   
   <img width="877" alt="Screen Shot 2019-03-21 at 8 18 04 AM" src="https://user-images.githubusercontent.com/1317309/54728501-97db1700-4bc1-11e9-89da-078445c71e9b.png">
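   
   For a quick check without the UI, one could also print the property from inside the driver; a hypothetical snippet, not part of this patch:
   
   ```scala
   // Verify the JVM option passed via spark.driver.extraJavaOptions
   // actually reached the driver's system properties.
   println(s"foo=${System.getProperty("foo")}")  // expected: foo=BAR
   ```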
   
