So to clarify, record-level consistency (and primary/secondary index
consistency) is guaranteed and will work "correctly" in all cases if a
record R is updated (or deleted) by T2 after being targeted (by primary
key) for deletion by T1. The only semantic issue is that there is a
(hopefully very, very small) window of time between when T1 sees R in a
secondary index and when it acquires the lock on R's primary key -
during which T2 could change R in a way that makes it no longer
query-compliant. (However, at the time it was observed - which
happened under read-committed - it was a correct candidate for
deletion. So this is kind of "expected" but admittedly kind of weird.
It seems like this could maybe be fixed in the future via a mechanism
similar to the index-only branch's way of handling locks?)
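
To make that window (and one possible fix) concrete, here's a minimal
sketch of the re-check idea, using hypothetical stand-ins (a per-key
lock table and an in-memory primary index) rather than our real lock
manager or index code:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.Predicate;

    public class RevalidatingDelete {
        private final Map<String, String> primaryIndex = new ConcurrentHashMap<>();
        private final Map<String, ReentrantLock> lockTable = new ConcurrentHashMap<>();

        // T1 found pk via a secondary index under read-committed; re-check
        // the delete predicate under the X lock so that a T2 update inside
        // the window can't make us delete a no-longer-qualifying record.
        boolean deleteIfStillMatches(String pk, Predicate<String> stillQualifies) {
            ReentrantLock xLock = lockTable.computeIfAbsent(pk, k -> new ReentrantLock());
            xLock.lock(); // X lock on R's primary key
            try {
                String record = primaryIndex.get(pk);
                if (record == null || !stillQualifies.test(record)) {
                    return false; // gone, or changed so it no longer matches
                }
                primaryIndex.remove(pk); // safe: re-checked under the lock
                return true;
            } finally {
                xLock.unlock(); // strict 2PL would hold this until commit
            }
        }
    }
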
On 6/24/16 10:59 AM, Young-Seok Kim wrote:
This is a somewhat expected issue given the read-committed isolation
level on top of the strict 2PL locking protocol.
Strict 2PL guarantees that all exclusive locks acquired by a
transaction are released only after the transaction commits.
But read locks don't follow this: under read-committed, a read lock is
released as soon as the read completes.
So, as you described in the email, a record read by a transaction T1
during the search can be modified by another transaction T2 before the
record
is deleted by T1. This is a possible situation under the read-committed
isolation level.
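
To illustrate the interleaving, here is a single-threaded simulation
with made-up names (the real transactions run concurrently, of
course):

    import java.util.HashMap;
    import java.util.Map;

    public class ReadCommittedWindow {
        public static void main(String[] args) {
            Map<Integer, String> primary = new HashMap<>();
            primary.put(1, "status=stale"); // R matches "delete where status = stale"

            // T1 searches under read-committed: the read lock on R is
            // taken and released as soon as the record is read.
            boolean matchedAtSearchTime = primary.get(1).contains("stale");

            // T2 updates R and commits; nothing blocks it, since T1 no
            // longer holds any lock on R.
            primary.put(1, "status=fresh");

            // T1 now acquires the X lock on PK 1 and deletes, based on
            // the match it observed earlier.
            if (matchedAtSearchTime) {
                primary.remove(1); // deletes a record that no longer qualifies
            }
            System.out.println("record after T1's delete: " + primary.get(1)); // null
        }
    }
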
However, there is no inconsistency between the primary index and the
secondary indexes of the kind where the record modified by T2 is
deleted by T1 from the primary index while the corresponding secondary
index entries are not. This is because when T1 starts the delete
process through the job pipeline, an exclusive lock on the record is
acquired first, and only then are the delete operations in the primary
and secondary indexes performed. So, either case1) a record with the
identical primary key exists and is deleted by T1 (since the search
delivers only the primary key, not the complete record), or case2) the
record is not deleted by T1 because no record with that primary key
exists.
For case1), once a record is deleted from the primary index, all of
the remaining secondary indexes in the job pipeline correctly find and
delete the corresponding secondary index entries.
For case2), I need to check whether the job pipeline throws an
exception when it tries to delete the non-existent record and stops,
aborting the job, or whether the exception is just swallowed and the
job proceeds to the next record.
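
For reference, here is a minimal sketch of that ordering (hypothetical
names, not our actual job-pipeline code): the exclusive lock comes
first, and the record removed from the primary index drives the
secondary-index deletes:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.Function;

    public class DeletePipelineSketch {
        private final Map<String, String> primary = new ConcurrentHashMap<>();
        private final List<Map<String, String>> secondaries;         // secondary key -> PK
        private final List<Function<String, String>> secondaryKeyOf; // record -> secondary key
        private final Map<String, ReentrantLock> lockTable = new ConcurrentHashMap<>();

        DeletePipelineSketch(List<Map<String, String>> secondaries,
                             List<Function<String, String>> secondaryKeyOf) {
            this.secondaries = secondaries;
            this.secondaryKeyOf = secondaryKeyOf;
        }

        void delete(String pk) {
            ReentrantLock xLock = lockTable.computeIfAbsent(pk, k -> new ReentrantLock());
            xLock.lock(); // the X lock is acquired before touching any index
            try {
                String record = primary.remove(pk); // case1: delete from the primary index
                if (record == null) {
                    return; // case2: no such PK - abort vs. skip is the open question
                }
                // the record just removed supplies the secondary keys, so
                // every secondary entry for this record is found and deleted
                for (int i = 0; i < secondaries.size(); i++) {
                    secondaries.get(i).remove(secondaryKeyOf.get(i).apply(record));
                }
            } finally {
                xLock.unlock(); // strict 2PL would hold this until commit
            }
        }
    }
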
Best,
Young-Seok
On Fri, Jun 24, 2016 at 10:14 AM, abdullah alamoudi <[email protected]>
wrote:
Hi everyone,
I think we have a problem related to the delete transaction behavior.
Here is the problem:
Our delete starts by searching the tree to identify the tuples to
delete based on the delete statement's conditional clause. It then
inserts delete tuples into the primary index, updates the secondary
indexes, and finally commits on the PK.
The problem happens if, after searching the tree and identifying the
records to be deleted, one of those records is updated. This will
cause the record to be deleted from the primary index even though it
may no longer meet the conditional clause. Moreover, the new entries
in the secondary indexes will remain without their record in the
primary index.
In order to fix this, we need to do one of the following:
1. lock the records when we do the search to identify the records to be
deleted
OR
2. when performing the delete, we double-check that the record we're
deleting is the same as the record we find when we do the actual
delete (see the sketch after this list)
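
Here is a rough sketch of what option 2 could look like (made-up
names; ConcurrentHashMap's atomic remove(key, value) stands in for
"re-read under the X lock and compare" in the real engine):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class DoubleCheckDelete {
        private final Map<String, String> primary = new ConcurrentHashMap<>();

        // snapshotAtSearch is the record image that matched the conditional
        // clause during the search; skip the delete if it changed since.
        boolean deleteIfUnchanged(String pk, String snapshotAtSearch) {
            return primary.remove(pk, snapshotAtSearch); // atomic compare-and-remove
        }
    }
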
A better way would be to perform the delete as we do the search, since
there is no need to do the whole search, materialize the results, and
then perform the delete.
There is a chance I got something wrong. Did I? Thoughts?