Hi,

Setup: Standalone cluster with 32 workers, 1 master
I am running a long-running Spark Streaming job (read from Kafka -> process
-> send to an HTTP endpoint) that should ideally never stop.
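
For context, the job is roughly shaped like this (a minimal sketch only,
assuming the spark-streaming-kafka-0-10 direct stream API; the broker
address, topic, batch interval and the postToEndpoint helper are
placeholders, not our real values):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.kafka.common.serialization.StringDeserializer

object KafkaToHttp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-http")
    // batch interval is illustrative, not our real setting
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "kafka:9092",   // placeholder broker
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "kafka-to-http",
      "auto.offset.reset"  -> "latest"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams))

    stream.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // process each record and push it to the HTTP endpoint
        records.foreach(record => postToEndpoint(record.value()))
      }
    }

    ssc.start()
    ssc.awaitTermination()   // block forever; the job should never stop
  }

  // Placeholder for the real HTTP client call.
  def postToEndpoint(payload: String): Unit =
    println(s"POST $payload")
}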

I have 2 questions:
1) I have sometimes seen the driver still running while the application is
marked as *Finished*. *Any idea why this happens, or any way to debug it?*
This usually shows up after the job has been running for 2-3 days (sometimes
4-5 days; the timeframe seems random), and I am not sure what causes it.
Nothing in the logs suggests failures or exceptions.
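
One debugging idea I was considering (a rough sketch, assuming a
SparkListener's onApplicationEnd would tell me when the SparkContext
actually stops on the driver side, so I can correlate that with the master
marking the application Finished):

import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd}

// Logs the moment the SparkContext stops on the driver, so it can be
// correlated with the time the master marks the application Finished.
class AppEndLogger extends SparkListener {
  override def onApplicationEnd(end: SparkListenerApplicationEnd): Unit = {
    println(s"SparkContext stopped at ${end.time}, driver JVM still alive")
  }
}

// registered once, right after the StreamingContext is created:
// ssc.sparkContext.addSparkListener(new AppEndLogger)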

2) Is there a way for the driver to kill itself instead of continuing to run
without any application to drive?
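
For example, would a watchdog along these lines be reasonable? (A sketch
only, assuming SparkContext.isStopped becomes true when the application
goes away; the poll interval and exit code are arbitrary choices.)

import org.apache.spark.SparkContext

object DriverWatchdog {
  // Daemon thread that polls SparkContext.isStopped and forcibly exits the
  // driver JVM once the context is gone, so no stray non-daemon thread can
  // keep the process alive.
  def start(sc: SparkContext): Unit = {
    val watchdog = new Thread(new Runnable {
      override def run(): Unit = {
        while (!sc.isStopped) Thread.sleep(30000)   // poll every 30 seconds
        System.exit(1)                              // context stopped: kill the JVM
      }
    })
    watchdog.setDaemon(true)   // the watchdog itself must not keep the JVM alive
    watchdog.setName("driver-watchdog")
    watchdog.start()
  }
}

// called once on the driver, e.g.: DriverWatchdog.start(ssc.sparkContext)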

Thanks,
KP
