Thank you for the reply.

I am not a Spark expert but I was reading through the code and I thought
that the state was changed from SUBMITTED to RUNNING only after executors
(CoarseGrainedExecutorBackend) were registered.
https://github.com/apache/spark/commit/015f7ef503d5544f79512b6333326749a1f0c48b#diff-a755f3d892ff2506a7aa7db52022d77cR95

Since you mentioned that the Launcher has no idea about executors, my
understanding is probably not correct.



SparkListener is an option, but it has its own pitfalls.
1) If I use spark.extraListeners, I get all the events, but I cannot
customize the listener, since I have to pass the class as a string to
spark-submit/Launcher.
2) If I use context.addSparkListener, I can customize the listener, but
then I miss the onApplicationStart event. Also, I don't know Spark's logic
for changing the application state from WAITING -> RUNNING.

Maybe you can answer this: if I have a Spark job that needs 3 executors and
the cluster can only provide 1, will the application be in WAITING or
RUNNING?
If I knew Spark's logic, then I could program something with the
SparkListener.onExecutorAdded event to correctly figure out the state.
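Something along these lines is what I have in mind, assuming a threshold-based rule; the threshold itself is the assumption in question, since I don't know whether Spark's WAITING -> RUNNING transition needs all requested executors or just one:

```scala
import java.util.concurrent.atomic.AtomicInteger
import org.apache.spark.scheduler.{
  SparkListener, SparkListenerExecutorAdded, SparkListenerExecutorRemoved}

// Hypothetical sketch: track registered executors and treat the application
// as "effectively running" once minExecutors have registered. minExecutors
// is my assumption, not Spark's documented semantics.
class ExecutorCountListener(minExecutors: Int) extends SparkListener {
  private val count = new AtomicInteger(0)

  override def onExecutorAdded(e: SparkListenerExecutorAdded): Unit = {
    if (count.incrementAndGet() >= minExecutors) {
      // At least minExecutors are registered; infer "RUNNING" here.
    }
  }

  override def onExecutorRemoved(e: SparkListenerExecutorRemoved): Unit = {
    count.decrementAndGet()
  }
}
```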

Another alternative could be to use the Spark Master JSON endpoint
(http://<>:8080/json), but the problem with this is that it returns
everything, and I was not able to find any way to filter it.



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
