tgravescs commented on a change in pull request #25668: [SPARK-28884][Core] Default number of cores for yarn mode
URL: https://github.com/apache/spark/pull/25668#discussion_r321370578
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/SparkContext.scala
 ##########
 @@ -2714,7 +2714,7 @@ object SparkContext extends Logging {
      case SparkMasterRegex.LOCAL_N_FAILURES_REGEX(threads, _) => convertToInt(threads)
       case "yarn" =>
         if (conf != null && conf.get(SUBMIT_DEPLOY_MODE) == "cluster") {
-          conf.getInt(DRIVER_CORES.key, 0)
+          conf.getInt(DRIVER_CORES.key, 1)
 
 Review comment:
   Really, I think this should be turned into conf.get(DRIVER_CORES), which already carries the right default (1), so no literal fallback is needed:
   
   DRIVER_CORES = ConfigBuilder("spark.driver.cores")
     .doc("Number of cores to use for the driver process, only in cluster mode.")
     .intConf
     .createWithDefault(1)
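   
   For illustration, here is a minimal, self-contained Scala sketch (using stand-in classes, not Spark's real ConfigEntry or SparkConf) of why reading through the typed entry picks up its declared default, while getInt with a literal forces the default to be restated at every call site:
   
    // Stand-in for Spark's ConfigEntry[Int] built via createWithDefault;
    // the default value lives on the entry itself.
    final case class IntConfigEntry(key: String, default: Int)
    
    // Stand-in for SparkConf's typed getter: it falls back to the
    // entry's own default when the key is unset.
    final class Conf(settings: Map[String, String]) {
      def get(entry: IntConfigEntry): Int =
        settings.get(entry.key).map(_.toInt).getOrElse(entry.default)
    }
    
    object ConfigEntryDemo {
      val DRIVER_CORES = IntConfigEntry("spark.driver.cores", default = 1)
    
      def main(args: Array[String]): Unit = {
        val unset = new Conf(Map.empty)
        val set   = new Conf(Map("spark.driver.cores" -> "4"))
        println(unset.get(DRIVER_CORES)) // 1 -- the entry's own default
        println(set.get(DRIVER_CORES))   // 4 -- the user-supplied value
      }
    }
   
   With the entry as the single source of truth, a change to the default only has to happen in one place.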
   
