[
https://issues.apache.org/jira/browse/DERBY-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kathey Marsden updated DERBY-4055:
----------------------------------
Priority: Major (was: Minor)
With revision 743867, I checked in a reproduction for the case where the row lock cannot be obtained after three tries.
To run it, enable store.ClobReclamationTest.xtestMultiThreadUpdateSingleRow().
If multiple threads are updating the same row, reclamation does not happen, and with derby.debug.true=DaemonTrace the log shows many instances of:
DEBUG DaemonTrace OUTPUT: gave up after 3 tries to get row lock Reclaim COLUMN_CHAIN...
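A rough, self-contained sketch of that kind of contention, for anyone who wants to see the symptom outside the test harness (the database, table, and column names and the thread/iteration counts below are made up; the actual reproduction is the ClobReclamationTest fixture named above):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.util.Arrays;

    public class ClobUpdateContention {
        public static void main(String[] args) throws Exception {
            // Create a single-row table with a CLOB column (embedded driver assumed).
            try (Connection setup = DriverManager.getConnection(
                     "jdbc:derby:reclaimDB;create=true");
                 Statement s = setup.createStatement()) {
                s.executeUpdate("CREATE TABLE CLOBTAB (ID INT PRIMARY KEY, C CLOB(1M))");
                s.executeUpdate("INSERT INTO CLOBTAB VALUES (1, 'initial value')");
            }

            // Several threads hammer the same row. Each CLOB update leaves an old
            // column chain for the post-commit reclaim daemon to free; while the row
            // lock is held by one of the updaters, the daemon's nowait lock request
            // fails, and after three attempts it gives up.
            Thread[] threads = new Thread[5];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    try (Connection c = DriverManager.getConnection("jdbc:derby:reclaimDB");
                         PreparedStatement ps = c.prepareStatement(
                             "UPDATE CLOBTAB SET C = ? WHERE ID = 1")) {
                        c.setAutoCommit(false);
                        char[] big = new char[32000];
                        Arrays.fill(big, 'x');
                        for (int n = 0; n < 100; n++) {
                            ps.setString(1, new String(big));
                            ps.executeUpdate();
                            c.commit();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
        }
    }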
> Space may not be reclaimed if locks are not available after three retries
> (code inspection)
> -------------------------------------------------------------------------------------------
>
> Key: DERBY-4055
> URL: https://issues.apache.org/jira/browse/DERBY-4055
> Project: Derby
> Issue Type: Bug
> Components: Store
> Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.0.0
> Reporter: Kathey Marsden
>
> I don't have a reproduction for these cases but there are two places in
> ReclaimSpaceHelper where reclamation will give up after three tries if it
> cannot obtain the lock to reclaim the space. The first code path is:
> ContainerHandle containerHdl =
>     openContainerNW(tran, container_rlock, work.getContainerId());
>
> if (containerHdl == null)
> {
>     tran.abort();
>
>     if (SanityManager.DEBUG)
>     {
>         if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>         {
>             SanityManager.DEBUG(
>                 DaemonService.DaemonTrace, " aborted " + work +
>                 " because container is locked or dropped");
>         }
>     }
>
>     if (work.incrAttempts() < 3) // retry this for several times
>     {
>         return Serviceable.REQUEUE;
>     }
>     else
>     {
>         // If code gets here, the space will be lost forever, and
>         // can only be reclaimed by a full offline compress of the
>         // table/index.
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace,
>                     " gave up after 3 tries to get container lock " + work);
>             }
>         }
>         return Serviceable.DONE;
>     }
> }
> The second is:
> RecordHandle headRecord = work.getHeadRowHandle();
>
> if (!container_rlock.lockRecordForWrite(
>         tran, headRecord, false /* not insert */, false /* nowait */))
> {
>     // cannot get the row lock, retry
>     tran.abort();
>
>     if (work.incrAttempts() < 3)
>     {
>         return Serviceable.REQUEUE;
>     }
>     else
>     {
>         // If code gets here, the space will be lost forever, and
>         // can only be reclaimed by a full offline compress of the
>         // table/index.
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace,
>                     " gave up after 3 tries to get row lock " + work);
>             }
>         }
>         return Serviceable.DONE;
>     }
> }
> If working to get a reproduction for these cases, you can set
> derby.debug.true=DaemonTrace and look for "gave up" in the derby.log.
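To illustrate those steps concretely (the schema and table names below are only placeholders, and derby.debug.true takes effect only with the debug/SANE build of the Derby jars):

    # derby.properties: trace the post-commit reclaim daemon
    derby.debug.true=DaemonTrace

If derby.log then shows the "gave up after 3 tries" messages, the orphaned space is not reclaimed automatically; per the comments in the code above, it can only be recovered with a full offline compress, for example from ij:

    CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'CLOBTAB', 1);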
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.