Mike Matrigali wrote:
Also, just posting the deadlock message with the information about the locks in
the deadlock cycle will answer a lot of questions. It may depend on whether there
are indexes, on how many rows are in the table, and on whether lock escalation
happened. Also make sure to include what version of the software you are running.

I'm running Derby 10.2.1.6.

The message is:

2007-01-31 16:57:53.828 GMT Thread[reader0,5,main] (XID = 6159), (SESSIONID = 231), (DATABASE = target/test-db), (DRDAID = null), Failed Statement is: select distinct i.* from item i, cal_timerange_index ctri1 where i.parentid=? and i.isactive=1 and ctri1.itemid=i.id and (ctri1.componenttype = ? and ((ctri1.startdate < case when ctri1.isfloating=1 then '20080218T115000' else ? end and ctri1.enddate > case when ctri1.isfloating=1 then '20080117T115000' else ? end) or (ctri1.startdate >= case when ctri1.isfloating=1 then '20080117T115000' else ? end and ctri1.startdate < case when ctri1.isfloating=1 then '20080218T115000' else ? end)))

So the select statement is a join of two tables. The statement returns about 30 rows, and there are roughly 300 rows in cal_timerange_index. The update transaction removes all of these rows and inserts the same number, while the read transaction selects 30 of the rows that are being deleted by the update transaction.
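
For reference, the update transaction has roughly this shape (just a sketch, not the actual application code; the delete predicate, the insert column list and the parent id are guesses based on the select statement above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of the shape of the update transaction described above. This is not
// the actual application code: the delete predicate, the insert column list
// and the parent id value are guesses based on the select statement in the log.
public class WriterSketch {
    static void runUpdateTransaction(Connection conn) throws SQLException {
        conn.setAutoCommit(false);

        // Remove all timerange index rows for the items under one parent (~300 rows).
        PreparedStatement del = conn.prepareStatement(
            "delete from cal_timerange_index where itemid in " +
            "(select id from item where parentid = ?)");
        del.setLong(1, 42L);   // placeholder parent id
        del.executeUpdate();
        del.close();

        // Re-insert roughly the same number of rows.
        PreparedStatement ins = conn.prepareStatement(
            "insert into cal_timerange_index " +
            "(itemid, componenttype, startdate, enddate, isfloating) values (?, ?, ?, ?, ?)");
        // ... bind and executeUpdate() for each of the ~300 new rows ...
        ins.close();

        conn.commit();   // every X lock taken by the delete/insert should be released here
    }
}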

The lock dump shows:

*** The following row is the victim ***
6159 |ROW |S |0 |(2,110) |WAIT |T |NULL |CAL_TIMERANGE_INDEX |

Here are the locks involved:
6154 |ROW |X |1 |(2,110) |GRANT|T|NULL|CAL_TIMERANGE_INDEX |
6159 |ROW |S |1 |(10,1)|GRANT|T|SQL070131105108180|CAL_TIMERANGE_INDEX |
6160 |ROW |X |0 |(10,1)|WAIT |T|SQL070131105108180 |CAL_TIMERANGE_INDEX |

There are also these table locks:
6154 |TABLE |IX |315 |Tablelock |GRANT|T|NULL |CAL_TIMERANGE_INDEX |
6159 |TABLE |IS |2 |Tablelock |GRANT|T |NULL |CAL_TIMERANGE_INDEX |
6160 |TABLE |IX |1 |Tablelock |GRANT|T |NULL |CAL_TIMERANGE_INDEX
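
As an aside, for anyone trying to get a richer dump than this: I believe Derby can be told to log extra lock detail when a deadlock happens via a couple of properties (this is from the tuning guide, I haven't verified exactly what extra output it produces):

// These can also be set in derby.properties; either way they have to be in
// place before the Derby engine boots.
System.setProperty("derby.locks.monitor", "true");        // log deadlocks to derby.log
System.setProperty("derby.locks.deadlockTrace", "true");  // extra lock/stack-trace detail in the dump;
                                                          // only has an effect when monitor is true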

A couple of things to note. There are 3 transactions, yet only 2 should be active at a time (1 writer thread, 1 reader thread). Also, none of these transactions is in a traditional deadlock cycle: 6159 is waiting on the row lock (2,110) held exclusively by 6154, and 6160 is waiting on the index row lock (10,1) held by 6159, but 6154 itself isn't waiting on anything. Instead, the deadlock is reported because 6154 never releases its locks. It's almost as if the locks from that transaction aren't getting released. I think I'm going to try to work on a simpler example to reproduce this.
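
The skeleton I have in mind looks something like this (just a sketch with a placeholder table and query, not the real schema, and I don't know yet whether it actually triggers the problem):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Skeleton of a simpler reproduction: one writer thread that deletes and
// re-inserts every row of a small table in one transaction, and one reader
// thread that repeatedly selects a subset of those rows. The table, row
// counts and query are placeholders, not the real schema.
public class DeadlockRepro {
    static final String URL = "jdbc:derby:target/repro-db;create=true";

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");  // needed on pre-JDBC4 JVMs

        Connection setup = DriverManager.getConnection(URL);
        setup.createStatement().executeUpdate("create table t (id int primary key, grp int)");
        PreparedStatement ins = setup.prepareStatement("insert into t values (?, ?)");
        for (int i = 0; i < 300; i++) { ins.setInt(1, i); ins.setInt(2, i % 10); ins.executeUpdate(); }
        setup.close();

        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    Connection c = DriverManager.getConnection(URL);
                    c.setAutoCommit(false);
                    Statement del = c.createStatement();
                    PreparedStatement p = c.prepareStatement("insert into t values (?, ?)");
                    while (true) {
                        del.executeUpdate("delete from t");
                        for (int i = 0; i < 300; i++) { p.setInt(1, i); p.setInt(2, i % 10); p.executeUpdate(); }
                        c.commit();  // should release every X lock taken above
                    }
                } catch (Exception e) { e.printStackTrace(); }
            }
        }, "writer0");

        Thread reader = new Thread(new Runnable() {
            public void run() {
                try {
                    Connection c = DriverManager.getConnection(URL);
                    c.setAutoCommit(false);
                    Statement sel = c.createStatement();
                    while (true) {
                        ResultSet rs = sel.executeQuery("select * from t where grp = 5");
                        while (rs.next()) { /* ~30 rows, like the real query */ }
                        rs.close();
                        c.commit();
                    }
                } catch (Exception e) { e.printStackTrace(); }
            }
        }, "reader0");

        writer.start();
        reader.start();
    }
}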

-Randy

Kristian Waagan wrote:
Randy Letness skrev:

A little more info:

I have a test program with a "writer thread", a thread that continuously performs update transactions, and a "reader thread", a thread that does selects on the data that is being updated. So with two threads, there should be at most 2 transactions in progress at any instant.

When the deadlock occurs, the lock dump table shows otherwise. There are 3 distinct transaction ids (1, 2, 3 for simplicity). Looking at the locks, I can tell that tx1 and tx3 are update transactions (they hold exclusive locks) and tx2 is the read tx (it holds shared locks). This doesn't make sense, because there is only a single thread doing updates, so there should only be a single update transaction active. Tx2 is waiting on a lock held by tx1, and tx3 is waiting on a lock held by tx2. For some reason the locks held by tx1 aren't released, and that is why the deadlock occurs. It's almost as if the locks from the update transaction aren't being fully released after a commit. Tx1 isn't waiting on anything, but its locks are still there. So something screwy is going on. Is there such a thing as a phantom lock?

Also, I can't reproduce this on a MySQL database. I'm using the same code, different JDBC driver.

Hello Randy,

I recommend you log a Jira issue with a simple reproduction script/program. Then the community can have a look at it and determine whether or not it is a bug. Even if it turns out not to be a bug, it will be nice to have it documented in the issue tracking system with an explanation of why Derby behaves this way.

Jira (Derby's issue tracking system) can be accessed here: https://issues.apache.org/jira/browse/DERBY


thanks,

