It does seem like there are minor problems there. Note that
releasing the individual lock may actually be more CPU intensive
than just letting it get released as part of the transaction, but
there is probably a slight memory savings to be had.
I think the behavior at isolation level 3 could probably be changed.
I tried an experiment at lock level 3.
Mike Matrigali wrote:
The logic is slightly different depending on isolation level;
what isolation level are you running? All the code gets the
table-level intent lock first, and if that succeeds it then checks
whether it holds covering locks such that it does not need to get
row locks.
The code is in the lockContainer() routine.
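The check described above can be sketched roughly as follows. This is a minimal illustration, not Derby's actual implementation; all class, method, and lock-mode names here are hypothetical, and the real logic lives in Derby's lock manager (e.g. the lockContainer() routine mentioned above):

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of "take the table intent lock first, then skip row
// locks if a covering table lock is already held". Hypothetical names;
// not Derby's real lock-manager code.
public class LockSketch {
    enum Mode { INTENT_SHARED, INTENT_EXCLUSIVE, SHARED, EXCLUSIVE }

    private final Set<Mode> tableLocks = new HashSet<>();   // held on the table
    private final Set<Integer> rowLocks = new HashSet<>();  // held on rows

    // Acquire the locks needed to touch one row.
    void lockRow(int rowId, boolean forUpdate) {
        // Step 1: always get the table-level intent lock first.
        tableLocks.add(forUpdate ? Mode.INTENT_EXCLUSIVE : Mode.INTENT_SHARED);

        // Step 2: if a covering table lock is held (X covers everything,
        // S covers reads), no row lock is needed.
        if (tableLocks.contains(Mode.EXCLUSIVE)
                || (!forUpdate && tableLocks.contains(Mode.SHARED))) {
            return; // covered: skip the row lock
        }

        // Step 3: otherwise fall back to a row-level lock.
        rowLocks.add(rowId);
    }

    void lockTableExclusive() { tableLocks.add(Mode.EXCLUSIVE); }

    int rowLockCount() { return rowLocks.size(); }

    public static void main(String[] args) {
        LockSketch t = new LockSketch();
        t.lockTableExclusive();                 // like LOCK TABLE a IN EXCLUSIVE MODE
        t.lockRow(1, true);                     // like INSERT INTO a VALUES (1)
        System.out.println(t.rowLockCount());   // covering X lock: no row lock

        LockSketch u = new LockSketch();
        u.lockRow(1, true);                     // no covering lock held
        System.out.println(u.rowLockCount());   // row lock taken
    }
}
```

Under this sketch, the table-X transaction never accumulates row locks, which is why an insert after LOCK TABLE ... IN EXCLUSIVE MODE could avoid the per-row locking cost entirely.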
I ran the following experiment, with somewhat surprising results:
create table a (a integer);
autocommit off;
lock table a in exclusive mode;
select * from syscs_diag.lock_table;
insert into a values (1);
select * from syscs_diag.lock_table; -- Note (1) below
commit;
select * from syscs_diag.lock_table;