[ 
https://issues.apache.org/jira/browse/DERBY-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12673325#action_12673325
 ] 

Kathey Marsden commented on DERBY-4055:
---------------------------------------

I looked at one of the test cases that triggered my assertion, P703.  In this 
case the test drops the container before commit:

    if (segment != ContainerHandle.TEMPORARY_SEGMENT) {
        t_util.t_dropContainer(t, segment, cid);        // cleanup
    }

    t.commit();
So I think that's why we give up.  I don't really see how I could emulate this 
in JDBC, and the lock case is the one I would want to trigger anyway.  So I 
don't think this unit test gives me a clue how to reproduce the "gave up 
after 3 tries to get container lock" case with JDBC.


> Space may not be reclaimed if locks are not available after three retries 
> (code inspection)
> -------------------------------------------------------------------------------------------
>
>                 Key: DERBY-4055
>                 URL: https://issues.apache.org/jira/browse/DERBY-4055
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.0.0
>            Reporter: Kathey Marsden
>         Attachments: derby.log.T_RawStoreFactoryWithAssert
>
>
> I don't have a reproduction for these cases but there are two places in 
> ReclaimSpaceHelper where reclamation will give up after three tries if it 
> cannot obtain the lock to reclaim the space.  The first code path is:
> ContainerHandle containerHdl = 
>     openContainerNW(tran, container_rlock, work.getContainerId());
>
> if (containerHdl == null)
> {
>     tran.abort();
>
>     if (SanityManager.DEBUG)
>     {
>         if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>         {
>             SanityManager.DEBUG(
>                 DaemonService.DaemonTrace, " aborted " + work + 
>                 " because container is locked or dropped");
>         }
>     }
>
>     if (work.incrAttempts() < 3) // retry this several times
>     {
>         return Serviceable.REQUEUE;
>     }
>     else
>     {
>         // If code gets here, the space will be lost forever, and
>         // can only be reclaimed by a full offline compress of the
>         // table/index.
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace, 
>                     "  gave up after 3 tries to get container lock " + 
>                     work);
>             }
>         }
>         return Serviceable.DONE;
>     }
> }
> the second is:
> RecordHandle headRecord = work.getHeadRowHandle();
>
> if (!container_rlock.lockRecordForWrite(
>         tran, headRecord, false /* not insert */, false /* nowait */))
> {
>     // cannot get the row lock, retry
>     tran.abort();
>
>     if (work.incrAttempts() < 3)
>     {
>         return Serviceable.REQUEUE;
>     }
>     else
>     {
>         // If code gets here, the space will be lost forever, and
>         // can only be reclaimed by a full offline compress of the
>         // table/index.
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace, 
>                     "  gave up after 3 tries to get row lock " + 
>                     work);
>             }
>         }
>         return Serviceable.DONE;
>     }
> }
> If you are working on a reproduction for these cases, you can set 
> derby.debug.true=DaemonTrace and look for "gave up" in derby.log.
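Both quoted code paths share the same retry-then-give-up flow: on a lock failure the work item is requeued until its attempt counter reaches three, after which it is marked done and the space is lost until an offline compress. A minimal standalone sketch of that flow (ReclaimWork, performWork, and the REQUEUE/DONE constants here only mimic Derby's Serviceable contract; they are not actual Derby classes):

```java
// Hypothetical sketch of the retry-then-give-up pattern in
// ReclaimSpaceHelper; not Derby code, just the control flow.
public class RetryDemo {
    public static final int REQUEUE = 1; // try this work item again later
    public static final int DONE = 2;    // finished (reclaimed OR given up)
    static final int MAX_ATTEMPTS = 3;

    public static class ReclaimWork {
        private int attempts;
        // Mirrors work.incrAttempts(): bump and return the new count.
        public int incrAttempts() { return ++attempts; }
    }

    // One service pass: if the lock is unavailable, requeue until the
    // attempt counter reaches MAX_ATTEMPTS, then give up (DONE), at which
    // point the space can only be recovered by an offline compress.
    public static int performWork(ReclaimWork work, boolean lockAvailable) {
        if (lockAvailable) {
            return DONE; // space reclaimed successfully
        }
        if (work.incrAttempts() < MAX_ATTEMPTS) {
            return REQUEUE;
        }
        return DONE; // gave up after 3 tries; space lost
    }

    public static void main(String[] args) {
        ReclaimWork work = new ReclaimWork();
        int passes = 0;
        int result;
        do {
            result = performWork(work, false); // lock never becomes available
            passes++;
        } while (result == REQUEUE);
        System.out.println("passes=" + passes); // prints passes=3
    }
}
```

With the lock permanently unavailable the daemon makes exactly three passes before returning DONE, which matches the "gave up after 3 tries" trace message in both code paths.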

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
