Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
Makes sense, will add documentation for this feature.
---
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2631
Could we add some documentation on this? This sounds important...
---
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
Will merge it if no more comments
---
Github user jongyoul commented on the issue:
https://github.com/apache/zeppelin/pull/2631
Got it, thanks. I'd be glad to help with future features.
---
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@Tagar Good point. Currently there's no such indicator in the frontend. For
now, what the user sees in the frontend is that a new interpreter group is
created (for the Spark interpreter, a new Spark app is started).
---
Github user Tagar commented on the issue:
https://github.com/apache/zeppelin/pull/2631
One last thing: from a user-experience standpoint it would be convenient to
know when an interpreter has timed out.
Something like a popup or just some sort of graphical flag would do, I
guess?
---
Github user Tagar commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@zjffdu got it - thank you.
---
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@Gauravshah It won't be killed, because the JobRunner in the Zeppelin server
process polls the job status periodically. Added one more test to verify
it.
---
Github user Gauravshah commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@zjffdu what if I am not at my desk and not polling the job?
---
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@jongyoul For now, only the interpreter process's lifecycle is controlled by
TimeoutLifecycleManager; session-level control will be added in the future if
necessary.
---
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@Tagar It won't be killed, because the LifecycleManager knows the client is
polling job progress via the API Interpreter.getProgress.
---
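The keep-alive mechanism discussed in this thread (an interpreter process stays alive as long as clients poll its progress, and is shut down after an idle timeout) could be sketched roughly as below. This is a hypothetical illustration with made-up names, not the actual TimeoutLifecycleManager code from the PR:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a timeout-based lifecycle manager: interpreter
// groups that see no activity (e.g. no progress polls) within the timeout
// are expired; any poll refreshes the idle timer.
public class TimeoutLifecycleSketch {
    private final long timeoutMs;
    private final Map<String, Long> lastActivity = new ConcurrentHashMap<>();

    public TimeoutLifecycleSketch(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    // Called when an interpreter group starts or a client polls progress
    // (in Zeppelin terms, something like Interpreter.getProgress).
    public void touch(String groupId, long nowMs) {
        lastActivity.put(groupId, nowMs);
    }

    // Returns true if the group has been idle past the timeout and removes
    // it, standing in for shutting down the interpreter process.
    public boolean expireIfIdle(String groupId, long nowMs) {
        Long last = lastActivity.get(groupId);
        if (last != null && nowMs - last > timeoutMs) {
            lastActivity.remove(groupId);
            return true;
        }
        return false;
    }

    public boolean isAlive(String groupId) {
        return lastActivity.containsKey(groupId);
    }
}
```

Under this model, a long-running Spark job whose progress keeps being polled never goes idle, so it is not killed even if it runs past the nominal timeout, which matches the behavior described above.
---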
Github user Tagar commented on the issue:
https://github.com/apache/zeppelin/pull/2631
Thank you @zjffdu.
I just thought about this scenario: a Spark job runs for 1.5 hours; would
it be killed by the LifecycleManager in this case (assuming the default
timeout of 1 hour)?
---
Github user jongyoul commented on the issue:
https://github.com/apache/zeppelin/pull/2631
I have a basic question. Does it work in `scoped` and `isolated`?
---
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2631
@Leemoonsoo @jongyoul @Tagar Please help review. Thanks
---