Which interpreter is pending? It is possible that the Spark interpreter is pending
due to YARN resource capacity if you run it in yarn-client mode.

If it is pending, you can check the log first.
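A minimal sketch of where to look, assuming a default Zeppelin layout ($ZEPPELIN_HOME, the log directory, and the log file naming pattern are assumptions; adjust to your installation):

```shell
# Assumed default log location; override ZEPPELIN_HOME if yours differs.
ZEPPELIN_LOG_DIR="${ZEPPELIN_HOME:-/opt/zeppelin}/logs"

# The most recently written interpreter logs often show why startup stalled:
ls -lt "$ZEPPELIN_LOG_DIR"/zeppelin-interpreter-*.log 2>/dev/null | head -n 5

# In yarn-client mode, an application stuck in ACCEPTED state means the
# queue has no free capacity (run on a machine with the yarn CLI):
# yarn application -list -appStates ACCEPTED
```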



Best Regards,
Jeff Zhang


From: Belousov Maksim Eduardovich <m.belou...@tinkoff.ru>
Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Date: Monday, October 2, 2017 at 9:26 PM
To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Subject: Is any limitation of maximum interpreter processes?

Hello, users!

Our analysts run notes with these interpreters: markdown, one or two JDBC, and
pyspark. The interpreters are instantiated Per User in isolated process and Per
Note in isolated process.

The analysts complain that sometimes paragraphs aren't processed and stay in
status 'Pending'.
We noticed that this happens when the number of started interpreter processes is
about 90-100.
If an admin restarts one of the popular interpreters (that is, kills some
interpreter processes), the pending paragraphs become 'Running'.

We can't see any workload on the Zeppelin server while paragraphs are pending:
RAM is sufficient, iowait is ~0.
Also, we can't find any parameter that limits the maximum number of interpreter
processes.
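A quick way to verify the process count, assuming the interpreter processes are started via Zeppelin's `RemoteInterpreterServer` main class (the default launch mechanism; the class name in the pattern is an assumption about your setup):

```shell
# Count running Zeppelin interpreter processes.
# The [R] bracket trick prevents grep from matching its own command line.
ps -ef | grep -c '[R]emoteInterpreterServer'
```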


Has any of you faced the same problem? How can it be solved?


Thanks,

Maksim Belousov

