geosmart commented on issue #5648:
URL: 
https://github.com/apache/dolphinscheduler/issues/5648#issuecomment-869334970


   @chengshiwen @blackberrier thanks for your advice; let me try to summarize the current stage.
   
   When we `provide k8s to run spark app`, we need to:
   
   # environment prepare
   * build a Spark-on-k8s Docker image for each specific Spark version.
   * collect Spark driver and executor logs with `ELK` or some other stack.
   * collect metrics and monitor the Spark cluster with `Prometheus` or some other stack.
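   The image-build step above could be sketched with the `docker-image-tool.sh` script that ships in the Spark binary distribution; the registry name and Spark version below are assumptions for illustration:

   ```shell
   # Sketch: build a Spark-on-k8s image for a specific Spark version.
   # REGISTRY and SPARK_VERSION are hypothetical values.
   SPARK_VERSION="3.1.2"
   REGISTRY="my-registry.example.com"
   # docker-image-tool.sh sits in the root of the Spark distribution;
   # compose the build command the CI job would run:
   BUILD_CMD="./bin/docker-image-tool.sh -r ${REGISTRY} -t v${SPARK_VERSION} build"
   echo "${BUILD_CMD}"
   ```

   One image per supported Spark version keeps the `sparkVersion` choice in the frontend a simple image-tag lookup.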
   
   # ds-frontend
   * `deployMode`: k8s,yarn,local
   * `sparkVersion`: support a custom version when deploying with `k8s`
   
   # ds-backend
   ## submit spark app (worker)
   * spark-submit: use the `spark-submit` command to submit an app
   * [spark-on-k8s operator](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator): use `kubectl apply -f <updated YAML file>` to submit an app
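   As a sketch of the first option, the worker could compose a `spark-submit` command pointed at the k8s API server; every concrete value here (API server address, image, example jar) is an assumption:

   ```shell
   # Hypothetical values for illustration only
   MASTER="k8s://https://k8s-apiserver:6443"
   IMAGE="my-registry.example.com/spark:v3.1.2"
   APP_JAR="local:///opt/spark/examples/jars/spark-examples_2.12-3.1.2.jar"
   # Compose the command the worker would execute; cluster deploy mode
   # makes the driver itself run as a pod inside the cluster.
   SUBMIT_CMD="spark-submit --master ${MASTER} --deploy-mode cluster \
   --name spark-pi --class org.apache.spark.examples.SparkPi \
   --conf spark.kubernetes.container.image=${IMAGE} ${APP_JAR}"
   echo "${SUBMIT_CMD}"
   ```

   For the operator path, the worker would instead render a `SparkApplication` YAML and run `kubectl apply -f` on it.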
   
   ## check spark app status(worker)
   * use `kubectl describe sparkapplications <name>` with the `spark-on-k8s operator`
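   A sketch of the status check the worker might run against the operator's custom resource; the namespace and application name are hypothetical:

   ```shell
   # Hypothetical identifiers for illustration
   NS="spark-jobs"
   APP="spark-pi"
   # With the operator, the application state lives in the
   # SparkApplication custom resource, so the worker would run:
   STATUS_CMD="kubectl describe sparkapplications ${APP} -n ${NS}"
   echo "${STATUS_CMD}"
   ```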
   
   ## kill spark app(master and worker)
   * use the `namespace` and `<driver-pod-name>` (deleting the driver pod is actually the way to stop a Spark application during execution)
   * use `kubectl delete sparkapplication <name>` with the `spark-on-k8s operator`
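   The two kill paths above could be sketched like this (namespace, pod name, and app name are all assumptions):

   ```shell
   # Hypothetical identifiers for illustration
   NS="spark-jobs"
   DRIVER_POD="spark-pi-driver"
   # Plain spark-submit mode: deleting the driver pod stops the whole app,
   # since the executors are owned by the driver and get cleaned up with it.
   KILL_POD_CMD="kubectl delete pod ${DRIVER_POD} -n ${NS}"
   # Operator mode: deleting the SparkApplication CR tears the app down.
   KILL_CR_CMD="kubectl delete sparkapplication spark-pi -n ${NS}"
   echo "${KILL_POD_CMD}"
   echo "${KILL_CR_CMD}"
   ```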
   

