I think if your job is already running and you deploy a new jar that is a newer version of the same program, Spark will treat the new jar as a separate job; jobs are distinguished by their job ID. So if you want to replace the jar, you have to kill the running job every time and resubmit.
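
For example, here is a rough sketch of that stop-and-resubmit cycle using Spark's launcher API (org.apache.spark.launcher). The Spark home, master URL, jar paths, and main class below are placeholders, not your actual setup:

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class RedeployNewJar {
    public static void main(String[] args) throws Exception {
        // Launch the current version of the application (placeholders throughout).
        SparkAppHandle oldApp = new SparkLauncher()
                .setSparkHome("/opt/spark")              // assumed Spark install path
                .setMaster("spark://master:7077")        // placeholder master URL
                .setDeployMode("cluster")
                .setAppResource("/jars/my-app-1.0.jar")  // old jar
                .setMainClass("com.example.MyApp")       // placeholder main class
                .startApplication();

        // When a new jar is built, Spark does not hot-swap it into the running
        // application: stop (or kill) the old application first...
        oldApp.stop();

        // ...then submit the new jar. Spark treats this as a brand-new
        // application with its own ID.
        SparkAppHandle newApp = new SparkLauncher()
                .setSparkHome("/opt/spark")
                .setMaster("spark://master:7077")
                .setDeployMode("cluster")
                .setAppResource("/jars/my-app-2.0.jar")  // new jar
                .setMainClass("com.example.MyApp")
                .startApplication();

        // The new application ID may still be null right after submission.
        System.out.println("Resubmitted as: " + newApp.getAppId());
    }
}

The same idea applies if you submit from the command line with spark-submit: stop the old application, then submit the new jar as a fresh application.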


------------------ Original message ------------------
From: "Mina Aslani" <aslanim...@gmail.com>
Date: November 29, 2018, 2:44
To: "user @spark" <user@spark.apache.org>

Subject: Do we need to kill a spark job every time we change and deploy it?



Hi,

I have a question for you. 
Do we need to kill a Spark job every time we change it and deploy it to the cluster?
Or, is there a way for Spark to automatically pick up the recent jar version?



Best regards,
Mina
