Could you just make Hadoop's YARN ResourceManager UI (port 8088) available to your
users? They could check available containers that way whenever they see a
launch stalling.
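Besides the web UI, the ResourceManager also exposes the same headroom numbers over its REST API, which is handy if users want to script the check. A minimal sketch (the host name is a placeholder, and the sample JSON below is abridged to the relevant fields):

```shell
# Against a live cluster you would run:
#   curl -s "http://<rm-host>:8088/ws/v1/cluster/metrics"
# Here we pipe an abridged sample response through python3 to pull out
# the fields that matter when a launch stalls: free memory and free vcores.
echo '{"clusterMetrics":{"availableMB":4096,"availableVirtualCores":3,"containersAllocated":5}}' \
  | python3 -c 'import sys, json; m = json.load(sys.stdin)["clusterMetrics"]; print(m["availableMB"], m["availableVirtualCores"])'
```

If `availableMB` or `availableVirtualCores` is near zero, new kernel launches will sit waiting for containers.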
Another option is to reduce the default number of executors and the memory per
executor in the launch script to a small fraction of the cluster's capacity.
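As a rough sketch of what that trimming could look like: the flags below are standard `spark-submit` options, but how they reach `spark-submit` depends on your kernelspec (Toree-style kernels, for example, forward a `SPARK_OPTS` environment variable). The specific numbers are only illustrative:

```shell
# Hypothetical per-notebook resource cap in the kernel launch script.
# With small defaults, one idle notebook no longer pins a large slice
# of the cluster, so more kernels can start concurrently.
export SPARK_OPTS="--master yarn \
  --deploy-mode client \
  --num-executors 2 \
  --executor-memory 1g \
  --executor-cores 1 \
  --driver-memory 1g"
```

Enabling YARN's dynamic allocation (`spark.dynamicAllocation.enabled`) is another way to let idle notebooks give containers back instead of holding them.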
Hi,
I am trying to create multiple notebooks connecting to Spark on YARN. After
starting a few jobs, my cluster ran out of containers. All new notebook
requests are stuck in the busy state because Jupyter Kernel Gateway is not
getting any containers in which to start the master.
Some jobs are not releasing the