So one thing I've noticed, with as much debug logging turned on as I
can find, is that when reverse proxy is enabled I end up with a lot of
these thread-start messages in the log:

17/05/31 21:06:37 DEBUG SelectorManager: Starting Thread[MasterUI-218-selector-ClientSelectorManager@36a67cc9/14,5,main] on org.spark_project.jetty.io.SelectorManager$ManagedSelector@7bfecdf7 keys=0 selected=0

I mean, a lot.  They appear to be periodic.
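
For reference, "as much debug logging as I can find" here basically
means conf/log4j.properties with the shaded Jetty logger bumped up from
the template's default of WARN, i.e. something along these lines:

  log4j.rootCategory=INFO, console
  # raise the bundled Jetty classes from the template default of WARN
  log4j.logger.org.spark_project.jetty=DEBUG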

Those selector messages are not present in the log when reverse proxy
is off, so I'm wondering whether some Jetty resource or thread pool is
being overrun (but I know next to nothing about Jetty).
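
For what it's worth, "reverse proxy" here just means the standard
spark.ui.reverseProxy=true setting in spark-defaults.conf (plus
spark.ui.reverseProxyUrl if you need the externally visible address),
nothing exotic.  If anyone wants to confirm whether the selector
threads are actually piling up rather than being reaped, I'd guess
something like

  jstack <master pid> | grep -c ClientSelectorManager

against the master JVM would give a rough count, though I haven't dug
into that end of it.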

Does that spark any ideas?

Best,

Trevor

On Wed, 2017-05-31 at 12:46 -0700, tmckay wrote:
> Hi folks,
> 
>   I'm running a containerized spark 2.1.0 setup.  If I have reverse
> proxy turned on, everything is fine if I have fewer than 10 workers.
> If I create a cluster with 10 or more, the master web ui is
> unavailable.
> 
> This is even true for the same cluster if I start with a lower number
> and add workers to bring the total above 10.  The web ui is available
> until I cross that boundary, then becomes unavailable.
> 
> If I run without reverse proxy, I can create clusters of 10+ (20, 30,
> 40 ...) and the console is always available.
> 
> Any ideas? Has anyone seen this behavior on bare metal, or VMs, or
> with containers?  Any logging I can turn on in the master webserver
> that might shed some light?
> 
> Thanks,
> 
> Trevor McKay
> 
> 
> 
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Problem-with-master-webui-with-reverse-proxy-when-workers-10-tp28729.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> 
