Thanks for the useful links. I really appreciate your help.
Let's assume that even though the table is indexed on the PK field, the
entire table is locked by the tx that executes the delete. That still doesn't
explain why a *deadlock* occurs. The only way a deadlock could occur with
table locking is if transaction A has TABLE1 locked and requests a table lock
on TABLE2 while, concurrently, transaction B has TABLE2 locked and requests a
table lock on TABLE1. That would cause a deadlock.
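To make the cycle concrete, here is a minimal sketch of that lock-ordering scenario using plain java.util.concurrent locks as stand-ins for the two table locks (this is illustration only, not Derby code; tryLock with a timeout is used so the demo terminates instead of hanging the way a real undetected deadlock would):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockCycle {
    // Stand-ins for table locks on TABLE1 and TABLE2.
    static final ReentrantLock table1 = new ReentrantLock();
    static final ReentrantLock table2 = new ReentrantLock();
    static volatile boolean aBlocked, bBlocked;

    public static void main(String[] args) throws InterruptedException {
        // Transaction A: locks TABLE1 first, then wants TABLE2.
        Thread a = new Thread(() -> {
            table1.lock();
            try {
                TimeUnit.MILLISECONDS.sleep(300); // hold TABLE1 while B grabs TABLE2
                aBlocked = !table2.tryLock(200, TimeUnit.MILLISECONDS);
                if (!aBlocked) table2.unlock();
            } catch (InterruptedException ignored) {
            } finally {
                table1.unlock();
            }
        });
        // Transaction B: locks TABLE2 first, then wants TABLE1 -- the reverse order.
        Thread b = new Thread(() -> {
            table2.lock();
            try {
                TimeUnit.MILLISECONDS.sleep(300); // hold TABLE2 while A grabs TABLE1
                bBlocked = !table1.tryLock(200, TimeUnit.MILLISECONDS);
                if (!bBlocked) table1.unlock();
            } catch (InterruptedException ignored) {
            } finally {
                table2.unlock();
            }
        });
        a.start(); b.start();
        a.join(); b.join();
        // Each thread holds the lock the other needs: a wait-for cycle.
        System.out.println("A blocked on TABLE2: " + aBlocked);
        System.out.println("B blocked on TABLE1: " + bBlocked);
    }
}
```

Each transaction times out waiting for the lock the other one holds, which is exactly the cycle a database deadlock detector looks for.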
But as you can see from the deadlock dump below, I am only executing a simple
DELETE FROM <TABLE> WHERE PK=<_PK>.
I just can't understand why that would lead to a *deadlock*. I can (grudgingly)
accept that the transaction would lock the table, but why would that lead to a
deadlock?
Also, if you could point me to documentation that explains the meaning of the
deadlock dump, that would be helpful. For example, what does this mean:
"Granted XID : {7249, IX} , {7250, IX}"
thanks!
-geoff
----- Original Message ----
From: Oystein Grovlen - Sun Norway <[EMAIL PROTECTED]>
To: Derby Discussion <[email protected]>
Sent: Thursday, January 10, 2008 12:24:57 AM
Subject: Re: why are table locks being made?
Some things to consider:

* Isolation level is only relevant when locking for reads. For
insert/update/delete, locking is needed for the duration of the transaction
in order to guarantee the recoverability of the database.

* If your table has a referential constraint to other tables, a delete may
cause cascading deletes in referred tables.
(Ref. http://db.apache.org/derby/docs/10.3/ref/rrefsqlj13590.html)

* The optimizer may choose to use table locking instead of row locking if it
thinks that will be more efficient. If a table scan is used, table locking
will certainly be used. If the table is very small, the optimizer may choose
to use a table scan instead of an index look-up. (E.g., if all records fit in
one page, a table scan will only access one page, while an index lookup will
access two pages.)

See http://db.apache.org/derby/docs/10.3/tuning/ctunoptimzoverride.html
for advice on how to force a query to use index lookup.
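For reference, a Derby optimizer override of that kind looks roughly like the following. This is a hedged sketch: IDX_USER_PK is an assumed index name (substitute whatever index actually covers PK), the override is shown on a SELECT, and whether it can be attached to the DELETE itself should be checked against the linked documentation:

```sql
-- Force an index look-up instead of a table scan.
-- IDX_USER_PK is a hypothetical index name; the --DERBY-PROPERTIES
-- comment must be the last thing on its line.
SELECT * FROM GEOFF__BLOG__USER --DERBY-PROPERTIES index=IDX_USER_PK
WHERE "PK" = 822
```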
Hope this clears up a few things. Please do not hesitate to ask more
questions if you are still not able to figure out how to solve your problem.
Posting your DDL and the query plans will also make it easier to pinpoint
your problem. (See http://wiki.apache.org/db-derby/PerformanceDiagnosisTips
for advice on how to generate query plans.)

--
Øystein
Geoff hendrey wrote:
> I execute 10 DELETE operations. Each DELETE is executed from its own
> thread and deletes a different row from the same table. I get the same
> behavior with both the embedded and network driver, on derby 10.3.14.
>
> I am seeing deadlocks. Here is the deadlock dump:
>
> A lock could not be obtained due to a deadlock, cycle of locks and
> waiters is:
> Lock : TABLE, GEOFF__BLOG__USER_MESSAGES, Tablelock
>   Waiting XID : {7250, IX} , BLOG, DELETE FROM GEOFF__BLOG__USER WHERE "PK"=822
>   Granted XID : {7249, IX}
> Lock : TABLE, GEOFF__BLOG__USER, Tablelock
>   Waiting XID : {7249, X} , BLOG, DELETE FROM GEOFF__BLOG__USER WHERE "PK"=830
>   Granted XID : {7249, IX} , {7250, IX}
> . The selected victim is XID : 7250.
>
> My question is, why are TABLE locks being issued? Why are any locks at
> all being issued? I have set the transaction isolation level to
> READ_UNCOMMITTED. My JDBC transactions have autocommit false. I am not
> issuing a LOCK TABLE or anything else that should cause the Tablelock.
>
> Also, since I never execute any delete against
> "GEOFF__BLOG__USER_MESSAGES", I am confused as to why this table appears
> in the dump. You can see that the delete statements both delete a
> different row from GEOFF__BLOG__USER, *not* from
> GEOFF__BLOG__USER_MESSAGES. Is it possible that there is some bug that
> causes derby to get confused about the long table names, one of which
> starts with the other?
>
> Any suggestions?