[
https://issues.apache.org/jira/browse/DERBY-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680196#action_12680196
]
Kathey Marsden edited comment on DERBY-4055 at 3/9/09 9:58 AM:
---------------------------------------------------------------
I was working with a user who found they could work around this issue by using
better indexing. They had a simultaneous update and select that accessed
different rows, yet they still saw this problem occur. Putting an index on the
column on which they were doing the select avoided a table scan and thus
avoided the issue. Hopefully this will work as a workaround for others until
we can get this issue fixed.
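
For illustration only, a minimal JDBC sketch of that kind of workaround. The
database name, the table T, its CLOB column C, and the lookup column ID are
hypothetical names for the sketch, not the user's actual schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IndexWorkaround {
        public static void main(String[] args) throws Exception {
            // Embedded Derby connection; the database name is hypothetical.
            Connection conn = DriverManager.getConnection(
                    "jdbc:derby:testdb;create=true");
            Statement s = conn.createStatement();
            // Without an index on ID, "SELECT C FROM T WHERE ID = ?" does a
            // full table scan and can contend for row locks with a
            // concurrent UPDATE of a different row, the scenario that
            // starves the space-reclamation daemon of the lock it needs.
            // With the index, the select touches only the row it needs.
            s.executeUpdate("CREATE INDEX T_ID_IDX ON T(ID)");
            s.close();
            conn.close();
        }
    }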
> Space may not be reclaimed if row locks are not available after three retries
> ------------------------------------------------------------------------------
>
> Key: DERBY-4055
> URL: https://issues.apache.org/jira/browse/DERBY-4055
> Project: Derby
> Issue Type: Bug
> Components: Store
> Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.0.0
> Reporter: Kathey Marsden
> Attachments: derby.log.T_RawStoreFactoryWithAssert
>
>
> In a multithreaded CLOB update where the same row is being updated, space
> will not be reclaimed. The offending code is in ReclaimSpaceHelper:
>     RecordHandle headRecord = work.getHeadRowHandle();
>
>     if (!container_rlock.lockRecordForWrite(
>                 tran, headRecord, false /* not insert */, false /* nowait */))
>     {
>         // cannot get the row lock, retry
>         tran.abort();
>         if (work.incrAttempts() < 3)
>         {
>             return Serviceable.REQUEUE;
>         }
>         else
>         {
>             // If code gets here, the space will be lost forever, and
>             // can only be reclaimed by a full offline compress of the
>             // table/index.
>             if (SanityManager.DEBUG)
>             {
>                 if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>                 {
>                     SanityManager.DEBUG(
>                         DaemonService.DaemonTrace,
>                         " gave up after 3 tries to get row lock " + work);
>                 }
>             }
>             return Serviceable.DONE;
>         }
>     }
> If we cannot get the lock after three tries, we give up. The reproduction for
> this issue is in the test
> store.ClobReclamationTest.xtestMultiThreadUpdateSingleRow().
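> (For readers without the test source handy, the reproduction is roughly of
> the following shape. This is a sketch, not the actual test; the database
> name, the table T, its CLOB column C, and the row id are hypothetical:)
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.PreparedStatement;
>
>     public class ClobUpdateRace {
>         public static void main(String[] args) throws Exception {
>             // Two threads repeatedly update the CLOB column of the same
>             // row. Each update orphans the old CLOB pages; the background
>             // reclaim daemon then races the updaters for the row lock and,
>             // per the code above, gives up after three failed attempts, so
>             // the space is never reclaimed.
>             Runnable updater = () -> {
>                 try (Connection conn =
>                         DriverManager.getConnection("jdbc:derby:testdb")) {
>                     PreparedStatement ps = conn.prepareStatement(
>                             "UPDATE T SET C = ? WHERE ID = 1");
>                     String big =
>                             new String(new char[33000]).replace('\0', 'x');
>                     for (int n = 0; n < 100; n++) {
>                         ps.setString(1, big);
>                         ps.executeUpdate(); // autocommit: each update commits
>                     }
>                 } catch (Exception e) {
>                     e.printStackTrace();
>                 }
>             };
>             Thread t1 = new Thread(updater);
>             Thread t2 = new Thread(updater);
>             t1.start();
>             t2.start();
>             t1.join();
>             t2.join();
>         }
>     }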
> This issue also used to reference the code below, and it contains some notes
> about attempts to reproduce that problem, but that work has moved to
> DERBY-4054. Please see DERBY-4054 for work on the container lock issue.
>     ContainerHandle containerHdl =
>         openContainerNW(tran, container_rlock, work.getContainerId());
>
>     if (containerHdl == null)
>     {
>         tran.abort();
>
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace, " aborted " + work +
>                     " because container is locked or dropped");
>             }
>         }
>
>         if (work.incrAttempts() < 3) // retry this several times
>         {
>             return Serviceable.REQUEUE;
>         }
>         else
>         {
>             // If code gets here, the space will be lost forever, and
>             // can only be reclaimed by a full offline compress of the
>             // table/index.
>             if (SanityManager.DEBUG)
>             {
>                 if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>                 {
>                     SanityManager.DEBUG(
>                         DaemonService.DaemonTrace,
>                         " gave up after 3 tries to get container lock " +
>                         work);
>                 }
>             }
>             return Serviceable.DONE;
>         }
>     }
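>
> (As the comments above say, once the daemon gives up, the space can only be
> recovered by a full offline compress. A sketch of invoking Derby's
> SYSCS_UTIL.SYSCS_COMPRESS_TABLE system procedure from JDBC follows; the
> database, schema, and table names are hypothetical:)
>
>     import java.sql.CallableStatement;
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>
>     public class CompressTable {
>         public static void main(String[] args) throws Exception {
>             Connection conn =
>                     DriverManager.getConnection("jdbc:derby:testdb");
>             // SYSCS_COMPRESS_TABLE(schema, table, sequential); a non-zero
>             // SEQUENTIAL argument compresses in a single-record pass,
>             // using less temporary disk space at the cost of time.
>             CallableStatement cs = conn.prepareCall(
>                     "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)");
>             cs.setString(1, "APP");
>             cs.setString(2, "T");
>             cs.setShort(3, (short) 1);
>             cs.execute();
>             cs.close();
>             conn.close();
>         }
>     }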