Github user skonto commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14650#discussion_r79789580
  
    --- Diff: mesos/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArguments.scala ---
    @@ -18,23 +18,43 @@
     package org.apache.spark.deploy.mesos
     
     import scala.annotation.tailrec
    +import scala.collection.mutable
     
    -import org.apache.spark.SparkConf
     import org.apache.spark.util.{IntParam, Utils}
    -
    +import org.apache.spark.SparkConf
     
     private[mesos] class MesosClusterDispatcherArguments(args: Array[String], conf: SparkConf) {
    -  var host = Utils.localHostName()
    -  var port = 7077
    -  var name = "Spark Cluster"
    -  var webUiPort = 8081
    +  var host: String = Utils.localHostName()
    +  var port: Int = 7077
    +  var name: String = "Spark Cluster"
    +  var webUiPort: Int = 8081
    +  var verbose: Boolean = false
       var masterUrl: String = _
       var zookeeperUrl: Option[String] = None
       var propertiesFile: String = _
    +  val confProperties: mutable.HashMap[String, String] =
    +    new mutable.HashMap[String, String]()
     
       parse(args.toList)
     
    +  // scalastyle:on println
       propertiesFile = Utils.loadDefaultSparkProperties(conf, propertiesFile)
    +  Utils.updateSparkConfigFromProperties(conf, confProperties)
    --- End diff --
    
    Non-Mesos code does not support `--conf`, or it does not care in all cases about Spark confs.
    Check here:
    
https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMasterArguments.scala
    and here:
    
https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L753
    
    If you compare the same functionality (the one about loading the properties file) with the MesosClusterDispatcherArguments file, things were completely different before my PR.
    
    
https://github.com/apache/spark/blob/master/mesos/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArguments.scala#L37
    
    So I don't expect parallel similarity, since, for example, in this case ApplicationMaster does not need to load the Spark configuration into a SparkConf.

