[image: image.png]
For reference: SparkLauncher also supports submitting to both YARN and Kubernetes, but it does so as an asynchronous process; by polling, the various state changes can be observed in real time. ApplicationDeployer.run can only wait synchronously for the appId to be returned. We need to record the appId in a database; having it only printed in the logs makes it inconvenient to retrieve. If the image above does not show, please refer to the code:
https://github.com/melin/spark-jobserver/blob/master/jobserver-admin/src/main/java/io/github/melin/spark/jobserver/deployment/YarnSparkDriverDeployer.java
  (the buildJobServer method)
https://github.com/melin/flink-jobserver/blob/master/jobserver-admin/src/main/java/io/github/melin/flink/jobserver/deployment/YarnApplicationDriverDeployer.java
  (the startApplication method)
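The asynchronous pattern described above can be sketched with SparkLauncher's public API: startApplication returns immediately with a handle, and the listener callbacks expose the applicationId long before the job finishes. This is only an illustrative sketch; the jar path and main class below are placeholders, and persisting the appId to a database is left as a stub.

```java
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class AsyncSubmitSketch {
    public static void main(String[] args) throws Exception {
        // startApplication() returns immediately; no synchronous wait for appId
        SparkAppHandle handle = new SparkLauncher()
                .setMaster("yarn")
                .setDeployMode("cluster")
                .setAppResource("/path/to/app.jar")      // placeholder
                .setMainClass("com.example.Main")        // placeholder
                .startApplication(new SparkAppHandle.Listener() {
                    @Override
                    public void stateChanged(SparkAppHandle h) {
                        System.out.println("state: " + h.getState());
                    }

                    @Override
                    public void infoChanged(SparkAppHandle h) {
                        // appId becomes available here as soon as YARN accepts
                        // the application; record it (e.g. to a database) so the
                        // job can be cancelled later via `yarn application -kill`
                        System.out.println("appId: " + h.getAppId());
                    }
                });

        // The handle can also cancel the job directly: handle.kill()
    }
}
```

With this design a job server can store the appId as soon as infoChanged fires, rather than blocking until submission completes as ApplicationDeployer.run does.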


Yang Wang <danrtsey...@gmail.com> wrote on Thu, Nov 24, 2022 at 09:54:

> Just a kind reminder: your attached images could not be displayed normally.
>
> Given that *ApplicationDeployer* is not only used for Yarn application
> mode, but also native Kubernetes, I am not sure which way you are referring
> to return the applicationId.
> We already print the applicationId in the client logs. Right?
>
> Best,
> Yang
>
> melin li <libinsong1...@gmail.com> wrote on Wed, Nov 23, 2022 at 23:46:
>
> > The task is submitted via the ApplicationDeployer API, and run is
> > synchronous, waiting for the submission to complete. If the task is
> > submitted to YARN, it is probably only in the ACCEPTED state, and the
> > YARN applicationId is not yet available at that point, which makes it
> > difficult to cancel the task. I recommend following the design of
> > org.apache.spark.launcher.SparkLauncher: submit tasks asynchronously so
> > that the applicationId can be obtained as soon as possible; then, to
> > cancel the task ahead of time, simply run `yarn application -kill XXX`.
> >
> > [image: image.png]
> > [image: image.png]
> >
>
