Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21092#discussion_r186244449
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala ---
    @@ -88,15 +94,22 @@ private[spark] class BasicDriverFeatureStep(
             .addToRequests("memory", driverMemoryQuantity)
             .addToLimits("memory", driverMemoryQuantity)
             .endResources()
    -      .addToArgs("driver")
    +      .addToArgs(driverDockerContainer)
           .addToArgs("--properties-file", SPARK_CONF_PATH)
           .addToArgs("--class", conf.roleSpecificConf.mainClass)
    -      // The user application jar is merged into the spark.jars list and managed through that
    -      // property, so there is no need to reference it explicitly here.
    -      .addToArgs(SparkLauncher.NO_RESOURCE)
    -      .addToArgs(conf.roleSpecificConf.appArgs: _*)
    -      .build()
     
    +    val driverContainer =
    +      if (driverDockerContainer == "driver-py") {
    --- End diff --
    
    @ifilonenko I think this still needs some cleanup.
    
    What I expect to happen is to have three step types:
    
    1. `BasicDriverFeatureStep`, which is what's here, except we no longer provide the args to the container in this step.
    2. `PythonDriverFeatureStep`, which does everything the current `PythonDriverFeatureStep` does, plus adds the `driver-py` argument.
    3. `JavaDriverFeatureStep`, which only adds the arguments `SparkLauncher.NO_RESOURCE`, `conf.roleSpecificConf.appArgs`, etc.
    
    Then in the `KubernetesDriverBuilder`, always apply the first step, and select between steps 2 and 3 based on the app resource type.
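
    A rough sketch of how that split could look. This is not the real implementation: the types here are simplified stand-ins (the actual steps operate on Fabric8 pod/container builders), `MainAppResource` and its subtypes are modeled as plain case objects, and the `"spark-internal"` value stands in for `SparkLauncher.NO_RESOURCE`:

```scala
// Simplified stand-ins for the app resource types the builder dispatches on.
sealed trait MainAppResource
case object JavaMainAppResource extends MainAppResource
case object PythonMainAppResource extends MainAppResource

trait DriverFeatureStep {
  def containerArgs: Seq[String]
}

// Step 1: common driver configuration; contributes no container args itself.
class BasicDriverFeatureStep extends DriverFeatureStep {
  override def containerArgs: Seq[String] = Seq.empty
}

// Step 2: the Python-specific config, plus the driver-py argument.
class PythonDriverFeatureStep extends DriverFeatureStep {
  override def containerArgs: Seq[String] = Seq("driver-py")
}

// Step 3: Java/Scala apps pass the driver entrypoint, the NO_RESOURCE
// placeholder, and the user's app args.
class JavaDriverFeatureStep(appArgs: Seq[String]) extends DriverFeatureStep {
  private val NoResource = "spark-internal" // stands in for SparkLauncher.NO_RESOURCE
  override def containerArgs: Seq[String] = Seq("driver", NoResource) ++ appArgs
}

object KubernetesDriverBuilder {
  // Always apply the basic step, then exactly one language-specific step.
  def steps(resource: MainAppResource, appArgs: Seq[String]): Seq[DriverFeatureStep] = {
    val languageStep = resource match {
      case PythonMainAppResource => new PythonDriverFeatureStep
      case JavaMainAppResource   => new JavaDriverFeatureStep(appArgs)
    }
    Seq(new BasicDriverFeatureStep, languageStep)
  }
}
```

    The point of the shape above is that all container-args logic lives in the language-specific steps, so `BasicDriverFeatureStep` never needs to branch on the app type.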

