Github user skonto commented on a diff in the pull request:
    --- Diff: 
    @@ -18,23 +18,43 @@
     package org.apache.spark.deploy.mesos
     import scala.annotation.tailrec
    +import scala.collection.mutable
    -import org.apache.spark.SparkConf
     import org.apache.spark.util.{IntParam, Utils}
    +import org.apache.spark.SparkConf
     private[mesos] class MesosClusterDispatcherArguments(args: Array[String], conf: SparkConf) {
    -  var host = Utils.localHostName()
    -  var port = 7077
    -  var name = "Spark Cluster"
    -  var webUiPort = 8081
    +  var host: String = Utils.localHostName()
    +  var port: Int = 7077
    +  var name: String = "Spark Cluster"
    +  var webUiPort: Int = 8081
    +  var verbose: Boolean = false
       var masterUrl: String = _
       var zookeeperUrl: Option[String] = None
       var propertiesFile: String = _
    +  val confProperties: mutable.HashMap[String, String] =
    +    new mutable.HashMap[String, String]()
    +  // scalastyle:on println
       propertiesFile = Utils.loadDefaultSparkProperties(conf, propertiesFile)
    +  Utils.updateSparkConfigFromProperties(conf, confProperties)
    --- End diff --
    Non-Mesos code does not support --conf, or it does not care in all cases about Spark confs.
    Check here:
    and here:
    If you compare the same functionality (loading the properties file) with the MesosClusterDispatcherArguments file, things were completely different before my PR.

    So I don't expect parallel similarity since, for example, in this case ApplicationMaster does not need to get the Spark configuration in a Spark