[ 
https://issues.apache.org/jira/browse/DERBY-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935530#comment-13935530
 ] 

Mike Matrigali edited comment on DERBY-6510 at 3/14/14 7:44 PM:
----------------------------------------------------------------

I took a quick look at the normal vs. problem prstat output. It looks like the 
java process is using more CPU in the problem case, as reported by prstat. Not 
sure what else to tell from that.

When it happens again it may be interesting to get stats on the individual 
threads, so that we can see whether the optimizer thread is for some reason 
waiting on a system resource. I am not sure what OS is in use at your user 
site - the following link talks, toward the end, about figuring out what 
individual threads are doing:
http://www.scalingbits.com/performance/prstat
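As a complement to prstat, if it is possible to run a small diagnostic class 
inside the Derby network server JVM, per-thread CPU time and state can also be 
pulled from java.lang.management.ThreadMXBean. The class below is just an 
illustrative sketch (not something that ships with Derby), and it assumes 
per-thread CPU timing is supported and can be enabled on the platform:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Sketch only: dumps per-thread CPU time and state for the JVM it runs in,
    // so it would have to be executed inside the Derby network server process.
    public class ThreadCpuDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // Per-thread CPU timing may be unsupported or disabled on some platforms.
            if (mx.isThreadCpuTimeSupported() && !mx.isThreadCpuTimeEnabled()) {
                mx.setThreadCpuTimeEnabled(true);
            }
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                long cpuNanos = mx.getThreadCpuTime(info.getThreadId()); // -1 if unavailable
                System.out.printf("cpu=%dms state=%s name=%s%n",
                        cpuNanos / 1000000, info.getThreadState(), info.getThreadName());
            }
        }
    }

Taking two such dumps a few seconds apart and diffing the CPU numbers should 
show which threads are actually consuming CPU and which are parked, and that 
ought to line up with what prstat reports for the individual LWPs.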


> Derby engine threads not making progress
> ----------------------------------------
>
>                 Key: DERBY-6510
>                 URL: https://issues.apache.org/jira/browse/DERBY-6510
>             Project: Derby
>          Issue Type: Bug
>          Components: Network Server
>    Affects Versions: 10.9.1.0
>         Environment: Oracle Solaris 10/9, Oracle M5000 32 CPU, 128GB memory, 
> 8GB allocated to Derby Network Server
>            Reporter: Brett Bergquist
>            Priority: Critical
>         Attachments: dbstate.log, derbystacktrace.txt, prstat.log, 
> prstat_normal.log, queryplan.txt, queryplan_nooptimizerTimeout.txt
>
>
> We had an issue today in a production environment at a large customer site. 
> Basically, 5 database interactions became stuck and made no further progress. 
> Part of the system dump performs a stack trace every few seconds, for a period 
> of a minute, on the Glassfish application server and the Derby database engine 
> (running in network server mode). The dump also captures the current 
> transactions and the current lock table (i.e. syscs_diag.transactions and 
> syscs_diag.lock_table). We had to restart the system, and in doing so the 
> Derby database engine would not shut down and had to be killed.
> The stack traces of the Derby engine show 5 threads that are making 
> essentially no progress: at each sample they are at the same point, waiting.
> I will attach the stack traces as well as the state of the transactions and 
> locks.
> It is interesting that "derby.jdbc.xaTransactionTimeout=1800" is set, yet the 
> transactions did not time out. The timeout is 30 minutes, but the transactions 
> had been in process for hours.
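For reference, the transaction/lock snapshot described above can be captured 
over JDBC from Derby's diagnostic tables SYSCS_DIAG.TRANSACTION_TABLE and 
SYSCS_DIAG.LOCK_TABLE. Below is a minimal sketch; the connection URL and 
database name are placeholders, and it assumes the Derby client driver 
(derbyclient.jar) is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    // Sketch only: dumps the contents of Derby's diagnostic tables.
    public class DerbyDiagSnapshot {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: adjust host, port, and database name for the actual server.
            String url = "jdbc:derby://localhost:1527/mydb";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement st = conn.createStatement()) {
                dump(st, "SELECT * FROM SYSCS_DIAG.TRANSACTION_TABLE");
                dump(st, "SELECT * FROM SYSCS_DIAG.LOCK_TABLE");
            }
        }

        // Print every row of the given query as "column=value" pairs.
        private static void dump(Statement st, String sql) throws Exception {
            System.out.println("=== " + sql + " ===");
            try (ResultSet rs = st.executeQuery(sql)) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnName(i)).append('=')
                           .append(rs.getString(i)).append("  ");
                    }
                    System.out.println(row);
                }
            }
        }
    }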



--
This message was sent by Atlassian JIRA
(v6.2#6252)
