Hi,

I am trying to create multiple notebooks connecting to Spark on YARN. After
starting a few jobs, my cluster ran out of containers. All new notebook
requests are stuck in the busy state because Jupyter Kernel Gateway cannot
get a container for the Spark application master to start in.

Some jobs do not release their containers for approximately 10-15 minutes,
so the user cannot figure out what is wrong or why the kernel is still busy.
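
Right now the only hint is on the YARN side. A quick way to see kernels
stuck waiting for an application master container, assuming the standard
YARN CLI is on the path:

    # Applications in ACCEPTED were admitted but have no AM container yet
    yarn application -list -appStates ACCEPTED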

Is there any property or workaround by which I can return a meaningful
response to users when there are no containers left?
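
One idea I am considering is to check the ResourceManager REST API before
launching a kernel and fail fast with a clear message. A minimal sketch,
assuming the RM web UI is reachable at rm-host:8088 and a hypothetical
minimum AM container of 1024 MB / 1 vcore:

    import requests

    RM_METRICS_URL = "http://rm-host:8088/ws/v1/cluster/metrics"  # assumed RM address
    MIN_AM_MB, MIN_AM_VCORES = 1024, 1  # hypothetical smallest AM container

    def cluster_has_room():
        """True if YARN reports enough free resources for one AM container."""
        metrics = requests.get(RM_METRICS_URL, timeout=5).json()["clusterMetrics"]
        return (metrics["availableMB"] >= MIN_AM_MB
                and metrics["availableVirtualCores"] >= MIN_AM_VCORES)

    if not cluster_has_room():
        # Surface a clear error instead of leaving the kernel busy forever
        raise RuntimeError("No YARN containers available; please try again later.")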

Alternatively, can I reserve a number of containers for application masters
equal to the maximum number of kernels I allow in my cluster, so that every
new kernel gets at least one container for its master? It could even be
dynamic and priority-based: if no container is left, YARN could preempt some
containers and hand them to the new request. A sketch of such a setup is
below.
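
As far as I know, YARN cannot reserve individual containers, but a dedicated
queue with guaranteed capacity plus preemption should give a similar effect.
A sketch of the relevant properties; the queue name notebooks and the 20/80
split are assumptions and would be sized to max-kernels times the AM
container size:

    <!-- capacity-scheduler.xml: guaranteed slice for notebook masters -->
    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>default,notebooks</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.notebooks.capacity</name>
      <value>20</value>  <!-- assumed share for kernel AMs -->
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.default.capacity</name>
      <value>80</value>  <!-- sibling capacities must sum to 100 -->
    </property>

    <!-- yarn-site.xml: let the scheduler preempt to restore guarantees -->
    <property>
      <name>yarn.resourcemanager.scheduler.monitor.enable</name>
      <value>true</value>
    </property>

Kernels would then be submitted with spark.yarn.queue=notebooks so their
masters land in the guaranteed queue.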


-- 

Thanks & Regards

Sachin Aggarwal
7760502772
