Yikun commented on a change in pull request #35776:
URL: https://github.com/apache/spark/pull/35776#discussion_r822271553



##########
File path: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/VolcanoFeatureStep.scala
##########
@@ -43,19 +44,27 @@ private[spark] class VolcanoFeatureStep extends KubernetesDriverCustomFeatureCon
   }
 
   override def getAdditionalPreKubernetesResources(): Seq[HasMetadata] = {
-    val podGroup = new PodGroupBuilder()
-      .editOrNewMetadata()
-        .withName(podGroupName)
-        .withNamespace(namespace)
-      .endMetadata()
-      .editOrNewSpec()
-      .endSpec()
+    val client = new DefaultVolcanoClient
 
-    queue.foreach(podGroup.editOrNewSpec().withQueue(_).endSpec())
+    val template = if (kubernetesConf.isInstanceOf[KubernetesDriverConf]) {
+      kubernetesConf.get(KUBERNETES_DRIVER_PODGROUP_TEMPLATE_FILE)
+    } else {
+      kubernetesConf.get(KUBERNETES_EXECUTOR_PODGROUP_TEMPLATE_FILE)
+    }
+    val pg = template.map(client.podGroups.load(_).get).getOrElse(new PodGroup())
+    var metadata = pg.getMetadata
+    if (metadata == null) metadata = new ObjectMeta
+    metadata.setName(podGroupName)
+    metadata.setNamespace(namespace)
+    pg.setMetadata(metadata)
 
-    priorityClassName.foreach(podGroup.editOrNewSpec().withPriorityClassName(_).endSpec())
+    var spec = pg.getSpec
+    if (spec == null) spec = new PodGroupSpec
+    queue.foreach(spec.setQueue(_))
+    priorityClassName.foreach(spec.setPriorityClassName(_))
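For context, the code above loads a user-supplied PodGroup template file and then overrides its name and namespace. A minimal template consumed by this path might look like the following (an illustrative sketch; the field names follow the Volcano PodGroup CRD, and the concrete values are hypothetical):

```yaml
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
# metadata.name and metadata.namespace are overwritten by the feature step,
# so only spec fields are meaningful here.
spec:
  queue: default
  priorityClassName: high-priority
  minMember: 1
```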

Review comment:
       OK, I can understand it. This is not only about exploring every possibility; many users also want to specify priority more flexibly at the job level.
   
   As an alternative way to help users set priority conveniently: we still keep this as your current implementation, but considering that priority scheduling is a very common case, do you think we could introduce a configuration like `spark.kubernetes.driver.PriorityClassName` to simplify the config template? This would also help both the **native default scheduler** and **custom schedulers** specify Spark job priority easily.
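   For illustration, with such a configuration (the property name is the reviewer's proposal, not an existing Spark setting) a user could skip the template file entirely and set the priority at submit time:

   ```properties
   # Hypothetical property from the suggestion above; not part of Spark at this point
   spark.kubernetes.driver.PriorityClassName=high-priority
   ```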




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


