GitHub user Tagar commented on the issue:
https://github.com/apache/zeppelin/pull/2631
Thank you @zjffdu.
I just thought about this scenario: a Spark job runs for 1.5 hours; would
it be killed by the LifeCycleManager in this case (assuming the default
timeout of 1 hour)?
If so, it might be nice to also have a grace period during which an
interpreter wouldn't be killed while it still has a running job.
In the above example, let's say timeout=1 hour and grace period=1 hour. An
interpreter would then be killed if it is completely inactive for 1 hour, or
after 2 hours if it had a Spark job that was still spinning.
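To illustrate, the proposed policy could be sketched roughly as below. This is only a hypothetical sketch of the idea, not Zeppelin's actual LifeCycleManager code; the function name and parameters are made up for illustration.

```python
def should_kill(idle_seconds, has_running_job, timeout=3600, grace=3600):
    """Hypothetical sketch: kill an interpreter once it has been idle
    past the timeout, extended by a grace period while a job is running."""
    limit = timeout + (grace if has_running_job else 0)
    return idle_seconds >= limit

# Idle 1.5 h with no running job -> past the 1 h timeout, so killed
print(should_kill(5400, False))  # True
# Idle 1.5 h but a Spark job is still spinning -> within 2 h, so kept
print(should_kill(5400, True))   # False
```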
Thoughts?
---