Hm, thanks, I'll check.

2015-01-06 23:31 GMT+03:00 Stack <st...@duboce.net>:

> The threads that are sticking around are Tomcat threads out of a Tomcat
> executor pool. IIRC, your server has high traffic. The pool is running up
> to 800 connections on occasion and taking a while to die back down?
> Googling, it seems this issue comes up frequently enough; try it
> yourself. If you can't figure out something like putting a bound on the
> executor, come back here and we'll try to help you out.
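>
> A minimal sketch of such a bound in conf/server.xml, assuming the stock
> HTTP connector (the pool name and limits below are illustrative, not your
> actual config):
>
>   <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
>             maxThreads="200" minSpareThreads="10" maxIdleTime="60000"/>
>   <Connector executor="tomcatThreadPool" port="8080"
>              protocol="HTTP/1.1" connectionTimeout="20000"/>
>
> With maxThreads capped and maxIdleTime set, idle workers get reaped
> instead of sticking around.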
>
> St.Ack
>
> On Tue, Jan 6, 2015 at 12:10 PM, Serega Sheypak <serega.shey...@gmail.com>
> wrote:
>
> > Hi, yes, it was me.
> > I've followed the advice; ZK connections on the server side are stable.
> > Here is the current state of Tomcat:
> >
> > http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
> > There are more than 800 live threads and daemon threads.
> >
> > and the state of three ZK servers:
> >
> > http://bigdatapath.com/wp-content/uploads/2015/01/001_zk_server_state.png
> >
> > Here is the pastebin:
> > http://pastebin.com/Cq8ppg08
> >
> > P.S.
> > It looks like Tomcat is running on the OpenJDK 64-Bit Server VM.
> > I'll ask to have it fixed; it should be the Oracle JDK 7.
> >
> > 2015-01-06 20:43 GMT+03:00 Stack <st...@duboce.net>:
> >
> > > On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <serega.shey...@gmail.com>
> > > wrote:
> > >
> > > > Yes, one of them (a random one) gets more connections than the others.
> > > >
> > > > Section 9.3.1.1 is OK.
> > > > I have one HConnection per logical module per application, and each
> > > > ServletRequest gets its own HTable. The HTable is closed each time the
> > > > ServletRequest is done. The HConnection is never closed.
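> > > >
> > > > A minimal sketch of that pattern on the HBase 0.98 client API (the
> > > > table name and row key are illustrative):
> > > >
> > > >   import org.apache.hadoop.conf.Configuration;
> > > >   import org.apache.hadoop.hbase.HBaseConfiguration;
> > > >   import org.apache.hadoop.hbase.client.Get;
> > > >   import org.apache.hadoop.hbase.client.HConnection;
> > > >   import org.apache.hadoop.hbase.client.HConnectionManager;
> > > >   import org.apache.hadoop.hbase.client.HTableInterface;
> > > >   import org.apache.hadoop.hbase.util.Bytes;
> > > >
> > > >   // Application-scoped: created once per logical module, never closed
> > > >   // while the webapp is running.
> > > >   Configuration conf = HBaseConfiguration.create();
> > > >   HConnection connection = HConnectionManager.createConnection(conf);
> > > >
> > > >   // Request-scoped: one lightweight HTable per ServletRequest.
> > > >   HTableInterface table = connection.getTable("visits"); // illustrative
> > > >   try {
> > > >       table.get(new Get(Bytes.toBytes("some-row"))); // illustrative key
> > > >   } finally {
> > > >       table.close(); // frees the table, not the shared connection
> > > >   }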
> > > >
> > > >
> > > This is you, right: http://search-hadoop.com/m/DHED4lJSA32
> > >
> > > Back then, we were leaking ZK connections. Is that fixed?
> > >
> > > Can you reproduce it in the small, by setting up your webapp deploy in a
> > > test bed and watching it for leaks?
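> > >
> > > A minimal sketch of such a test bed, reusing the pattern you describe
> > > (class name, table, and iteration count are illustrative):
> > >
> > >   import org.apache.hadoop.conf.Configuration;
> > >   import org.apache.hadoop.hbase.HBaseConfiguration;
> > >   import org.apache.hadoop.hbase.client.Get;
> > >   import org.apache.hadoop.hbase.client.HConnection;
> > >   import org.apache.hadoop.hbase.client.HConnectionManager;
> > >   import org.apache.hadoop.hbase.client.HTableInterface;
> > >   import org.apache.hadoop.hbase.util.Bytes;
> > >
> > >   public class LeakRepro {
> > >       public static void main(String[] args) throws Exception {
> > >           Configuration conf = HBaseConfiguration.create();
> > >           HConnection connection = HConnectionManager.createConnection(conf);
> > >           for (int i = 0; i < 10000; i++) {
> > >               HTableInterface table = connection.getTable("visits");
> > >               try {
> > >                   table.get(new Get(Bytes.toBytes("row-" + i)));
> > >               } finally {
> > >                   table.close();
> > >               }
> > >               // A steadily climbing count here means something besides
> > >               // the HTables is leaking threads.
> > >               System.out.println("live threads: " + Thread.activeCount());
> > >           }
> > >           connection.close();
> > >       }
> > >   }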
> > >
> > > For this issue, can you post a thread dump in a pastebin or gist so we
> > > can see?
> > >
> > > Can you post code too?
> > >
> > > St.Ack
> > >
> > > > 2015-01-05 21:22 GMT+03:00 Ted Yu <yuzhih...@gmail.com>:
> > > >
> > > > > In 022_zookeeper_metrics.png, server names are anonymized. It looks
> > > > > like only one server got a high number of connections.
> > > > >
> > > > > Have you seen section 9.3.1.1 of http://hbase.apache.org/book.html#client ?
> > > > >
> > > > > Cheers
> > > > >
> > > > > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <serega.shey...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi, here is a repost with image links.
> > > > > >
> > > > > > Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
> > > > > > 0.98.6.
> > > > > > The root problem is that the number of user threads grows constantly.
> > > > > > I get thousands of live threads on the Tomcat instance, and then it
> > > > > > dies, of course.
> > > > > >
> > > > > > Please see the VisualVM thread count dynamics:
> > > > > > http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > > > > >
> > > > > >
> > > > > > Please see the selected thread. It should be related to ZooKeeper
> > > > > > (because of the thread-name suffix SendThread):
> > > > > > http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > > > > >
> > > > > > The thread dump for this thread is:
> > > > > >
> > > > > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > > > > >    java.lang.Thread.State: WAITING
> > > > > > at sun.misc.Unsafe.park(Native Method)
> > > > > > - parking to wait for <34671cea> (a
> > > > > >
> > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > > > > at
> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > > > > at
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > > > > at
> > > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > > > > >
> > > > > >    Locked ownable synchronizers:
> > > > > > - None
> > > > > >
> > > > > > Why does it live "forever"? In the next 24 hours I would get ~1200
> > > > > > live threads.
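> > > > > >
> > > > > > (Presumably each ZooKeeper client handle starts one
> > > > > > SendThread/EventThread pair, and the EventThread blocks in
> > > > > > LinkedBlockingQueue.take(), as in the dump above, until the handle
> > > > > > is closed. A minimal sketch to check that in isolation; the connect
> > > > > > string is illustrative:
> > > > > >
> > > > > >   import org.apache.zookeeper.WatchedEvent;
> > > > > >   import org.apache.zookeeper.Watcher;
> > > > > >   import org.apache.zookeeper.ZooKeeper;
> > > > > >
> > > > > >   public class ZkThreadDemo {
> > > > > >       public static void main(String[] args) throws Exception {
> > > > > >           int before = Thread.activeCount();
> > > > > >           // Each handle starts a SendThread and an EventThread.
> > > > > >           ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, new Watcher() {
> > > > > >               public void process(WatchedEvent event) { }
> > > > > >           });
> > > > > >           System.out.println("added: " + (Thread.activeCount() - before));
> > > > > >           zk.close(); // both threads exit only after close()
> > > > > >           Thread.sleep(1000);
> > > > > >           System.out.println("after close: " + (Thread.activeCount() - before));
> > > > > >       }
> > > > > >   }
> > > > > >
> > > > > > If the count never drops back, handles are being opened but never
> > > > > > closed.)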
> > > > > >
> > > > > > "visit thread" does simple put/get by key, newrelic says it takes
> > > 30-40
> > > > > ms
> > > > > > to respond.
> > > > > > I just set a name for the thread inside servlet method.
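> > > > > >
> > > > > > Something like this, inside the servlet (the naming scheme is
> > > > > > illustrative):
> > > > > >
> > > > > >   // Rename the pooled Tomcat worker for this request so it is easy
> > > > > >   // to spot in thread dumps and profilers.
> > > > > >   Thread.currentThread().setName("visit-thread-" + System.nanoTime());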
> > > > > >
> > > > > > Here is the CPU profiling result:
> > > > > > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > > > > >
> > > > > > Here is the ZooKeeper status:
> > > > > > http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > > > > >
> > > > > > How can I debug and find the root cause of these long-living
> > > > > > threads? It looks like I have a thread leak, but I have no idea
> > > > > > why...
> > > > > >
> > > > > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yuzhih...@gmail.com>:
> > > > > >
> > > > > > > I used Gmail.
> > > > > > >
> > > > > > > Please consider using a third-party site where you can upload
> > > > > > > images.
> > > > > > >