No, proxies have a rather short and defensive socket retention time. Otherwise
they would quickly get stuck and run out of sockets…

LieGrue,
strub
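If a proxy or firewall is silently dropping sockets like this, pool-level validation catches dead connections before they reach the app. A minimal sketch of a TomEE datasource with validation turned on (the resource id, driver, URL, credentials and timing values are placeholder assumptions, not from this thread):

```xml
<!-- tomee.xml sketch: validate pooled connections so ones killed by a
     proxy/firewall are discarded instead of handed to the application.
     id, driver, url, credentials and timings are placeholders. -->
<Resource id="jdbc/appDs" type="DataSource">
  JdbcDriver org.postgresql.Driver
  JdbcUrl jdbc:postgresql://dbhost:5432/appdb
  UserName app
  Password secret
  ValidationQuery SELECT 1
  TestOnBorrow true
  TestWhileIdle true
  TimeBetweenEvictionRunsMillis 30000
</Resource>
```

TestOnBorrow adds a round trip per checkout; TestWhileIdle with a background eviction run is the cheaper variant if that overhead matters.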

> On 11.06.2015 at 18:51, Romain Manni-Bucau <[email protected]> wrote:
> 
> On 11 June 2015 at 09:49, "Mark Struberg" <[email protected]> wrote:
>> 
>> Hmm, if there is a network issue then the connection should actually get
>> closed, right?
> 
> Or not, depending on the setup; "network" is maybe not the right word. A
> proxy/firewall can hold a connection for weeks even if there is no longer
> any instance/db behind it.
> 
>> So the lock gets cleaned and you will see an Exception in your app.
>> 
>> Sounds like a really weird issue, and I have not seen this on any system
>> yet. All the more important to get to the real reason why it stalls for
>> you.
>> 
>> Let’s try to dissect this from the top down.
>> 
>> You have 507 threads overall:
>> ~/tmp/delete/log$>grep java.lang.Thread.State log1.txt  | wc -l
>> 507
>> 
>> Almost all of them are in a waiting state:
>> ~/tmp/delete/log$>grep "java.lang.Thread.State: TIMED_WAITING" log1.txt | wc -l
>> 459
>> 
>> 
>> There are 64 threads waiting for a database connection, right?
>> 
>> ~/tmp/delete/log$>grep org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection\(ConnectionPool.java:650 log1.txt | wc -l
>> 64
>> 
>> 
>> We have quite a few EJBs waiting on a lock:
>> ~/tmp/delete/log$>grep org.apache.openejb.util.Pool.pop\(Pool.java:224 log1.txt | wc -l
>> 139
>> But those are by far not all the threads.
>> 
>> We also have 5 acceptor threads blocked in accept(), waiting for incoming connections:
>> ~/tmp/delete/log$>grep "java.net.ServerSocket.accept(ServerSocket.java:513" log1.txt | wc -l
>> 5
>> 
>> Plenty of smaller fish, like 28 Hazelcast threads, etc…
>> 
>> And one 'big fish':
>> 
>> ~/tmp/delete/log$>grep org.apache.openejb.pool.scheduler log1.txt | wc -l
>> 233
>> That confuses me a bit to be honest.
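The individual grep counts above can be condensed into one summary per thread state. A minimal sketch (using a tiny stand-in file, since log1.txt is not to hand here):

```shell
# A minimal sketch: condense per-state counts from a jstack thread dump
# into one summary. sample.txt is a small stand-in for log1.txt.
cat > sample.txt <<'EOF'
   java.lang.Thread.State: TIMED_WAITING (parking)
   java.lang.Thread.State: RUNNABLE
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
EOF
# extract just the state names, count duplicates, biggest first
grep -o 'java\.lang\.Thread\.State: [A-Z_]*' sample.txt | sort | uniq -c | sort -rn
# prints 2 TIMED_WAITING, 1 RUNNABLE (each line led by its count)
```

Run against the real dump, this gives the 459 TIMED_WAITING figure above in one line per state.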
>> 
>> LieGrue,
>> strub
>> 
>> 
>>> On 10.06.2015 at 20:21, Romain Manni-Bucau <[email protected]> wrote:
>>> 
>>> 2015-06-10 11:17 GMT-07:00 Matej <[email protected]>:
>>> 
>>>> Hi.
>>>> 
>>>> Maybe what i forgot to tell is that when looking at the database we see
>>>> many connections like that:
>>>> 
>>>> IDLE IN TRANSACTION, many (ALL) transactions that are long-running,
>>>> waiting for a commit statement which somehow doesn't happen.
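Those idle-in-transaction sessions can be listed directly on the Postgres side; a hedged sketch against the pg_stat_activity view (column names per PostgreSQL 9.2+):

```sql
-- list sessions stuck idle inside an open transaction, oldest first
-- (pg_stat_activity view, PostgreSQL 9.2+ column names)
SELECT pid, usename, state, state_change, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY state_change;
```

state_change shows how long each session has been sitting there, and query shows the last statement it ran before going idle.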
>>>> 
>>>> And the CPU load on the DB and app server is almost 0, so the system
>>>> does seem to lock up.
>>>> 
>>>> Romain, last question: can 2 or more transactions share the same
>>>> connection? Maybe this could cause some locks to deadlock?
>>>> 
>>>> 
>>> It shouldn't, normally.
>>> 
>>> Given what you observe, it really looks like the connection is somehow
>>> lost on the client side. Maybe try to kill the connection on the DB side;
>>> it could throw an error on the application side and unlock the app. If
>>> so, the network has an issue somewhere.
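Romain's suggestion to kill the connection on the DB side can be done with pg_terminate_backend; a sketch (the 5-minute cutoff is an arbitrary assumption, tune to taste):

```sql
-- forcefully close backends that have sat idle in a transaction for a
-- while; the application side should then see an error and release its
-- lock, confirming or refuting the lost-connection theory
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < now() - interval '5 minutes';
```

If the app unblocks with an exception right after this, the connection was indeed dead on the client side.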
>>> 
>>> 
>>>> 
>>>> BR
>>>> 
>>>> Matej
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 2015-06-10 20:01 GMT+02:00 Romain Manni-Bucau <[email protected]>:
>>>> 
>>>>> 2015-06-10 10:59 GMT-07:00 Matej <[email protected]>:
>>>>> 
>>>>>> Hi Romain.
>>>>>> 
>>>>>> We looked for DB locks, but it's only SELECT and INSERT statements,
>>>>>> which should not cause locking on PostgreSQL. And we also looked for
>>>>>> locks using PostgreSQL debugging and analytics. So we suspected the
>>>>>> network and the JDBC driver; we did some TCP stack tuning, but also no
>>>>>> major change.
>>>>>> 
>>>>>> 
>>>>> I was thinking more about locks in the datasource pool than on the DB
>>>>> side (i.e. pool size too small).
>>>>> 
>>>>> 
>>>>>> Everything runs on VMware, so maybe this could also be something to
>>>>>> look at. We could add some more Tomcat DB pool timeouts, but this is a
>>>>>> dirty "fix".
>>>>>> 
>>>>> 
>>>>> Set the timeout to maybe 0 so the app fails if there are not enough
>>>>> connections in the pool.
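Romain's fail-fast suggestion maps to the pool's MaxWait setting; a sketch assuming a TomEE-managed datasource (the resource id and MaxActive value are placeholders):

```xml
<!-- tomee.xml sketch: MaxWait 0 makes borrow attempts fail immediately
     when the pool is exhausted, instead of queueing indefinitely.
     id and MaxActive are placeholder assumptions. -->
<Resource id="jdbc/appDs" type="DataSource">
  MaxActive 64
  MaxWait 0
</Resource>
```

With this, pool exhaustion surfaces as an immediate exception in the app rather than silently piling up waiting threads, which makes the diagnosis above much easier.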
>>>>> 
>>>>> 
>>>>>> 
>>>>>> Can this be a phenomenon of not enough connections in the pool? But I
>>>>>> would suspect a performance drop, not a lock.
>>>>>> 
>>>>> 
>>>>> Well, I don't see locks in the logs, but "locked", which means waiting
>>>>> for something to happen (getting the connection).
>>>>> 
>>>>> 
>>>>>> 
>>>>>> Can somebody explain this: is 1 transaction == 1 connection? So
>>>>>> connected beans share the same connection and transaction?
>>>>>> 
>>>>> 
>>>>> Per datasource, you have 1 connection per transaction, yes.
>>>>> 
>>>>> 
>>>>>> 
>>>>>> BR
>>>>>> 
>>>>>> Matej
>>>>>> 
>>>>>> 2015-06-10 19:24 GMT+02:00 Romain Manni-Bucau <[email protected]>:
>>>>>> 
>>>>>>> Well, before playing with the config the idea is to know what
>>>>>>> happened; is the network quality the cause, for instance?
>>>>>>> 
>>>>>>> 
>>>>>>> Romain Manni-Bucau
>>>>>>> @rmannibucau <https://twitter.com/rmannibucau> | Blog
>>>>>>> <http://rmannibucau.wordpress.com> | Github <https://github.com/rmannibucau> |
>>>>>>> LinkedIn <https://www.linkedin.com/in/rmannibucau> | Tomitriber
>>>>>>> <http://www.tomitribe.com>
>>>>>>> 
>>>>>>> 2015-06-10 10:22 GMT-07:00 Howard W. Smith, Jr. <[email protected]>:
>>>>>>> 
>>>>>>>> On Wed, Jun 10, 2015 at 1:18 PM, Romain Manni-Bucau
>>>>>>>> <[email protected]> wrote:
>>>>>>>> 
>>>>>>>>> Hi Matej
>>>>>>>>> 
>>>>>>>>> Looks like a database issue to me (the stateless pool is waiting
>>>>>>>>> for the stateless bean trying to get the DB connection to be
>>>>>>>>> released); hasn't the DB connection been lost or something like
>>>>>>>>> that?
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> What is the best way for Matej to improve that? Tomcat JDBC pool
>>>>>>>> settings?
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> 
