Hi all,

On Wed, Dec 17, 2025 at 9:09 PM Melanie Plageman <[email protected]> wrote:
>
> On Tue, Dec 16, 2025 at 11:46 PM Soumya S Murali
> <[email protected]> wrote:
> >
> > Thank you for the clarification.
> > I understand the concern, and apologies for the earlier confusion. My
> > intention is to stay aligned with the direction of your v11 series, so
> > this patch contains only small, incremental changes intended to be
> > easy to review and not conflict with ongoing work.
> > My previous message was only meant to summarize logic I was
> > experimenting with locally, not to propose it as a final patch. Based
> > on the feedback, I revisited the implementation, made the necessary
> > modifications, and verified the updated logic.
> > I am attaching the patch for review. It has been tested with make
> > check and make -C src/test/isolation check.
> > Thank you again for the guidance. I hope this is helpful for the
> > ongoing work, and I am looking forward to further feedback.
>
> I took a look at your patch and the first two parts were already fixed
> in a previous version
>
> --- a/src/backend/storage/buffer/bufmgr.c
> +++ b/src/backend/storage/buffer/bufmgr.c
> @@ -4405,7 +4405,7 @@ BufferNeedsWALFlush(BufferDesc *bufdesc, XLogRecPtr *lsn)
>  	UnlockBufHdr(bufdesc);
>
>  	/* Skip all WAL flush logic if relation is not logged */
> -	if (!(*lsn != InvalidXLogRecPtr))
> +	if (!XLogRecPtrIsValid(*lsn))
>  		return false;
>
>
> @@ -4433,6 +4433,12 @@ CleanVictimBuffer(BufferDesc *bufdesc, bool from_ring, IOContext io_context)
>  	content_lock = BufHdrGetContentLock(bufdesc);
>  	LWLockAcquire(content_lock, LW_SHARED);
>
> +	if (!PrepareFlushBuffer(bufdesc, &max_lsn))
> +	{
> +		LWLockRelease(content_lock);
> +		return;
> +	}
> +
>
> And I don't understand the last parts of the diff.
>
> @@ -4440,19 +4446,14 @@ CleanVictimBuffer(BufferDesc *bufdesc, bool from_ring, IOContext io_context)
>  	LWLockRelease(content_lock);
>  	LWLockAcquire(content_lock, LW_EXCLUSIVE);
>
> -	/*
> -	 * Mark the buffer ready for checksum and write.
> -	 */
> -	PrepareBufferForCheckpoint(bufdesc, &max_lsn);
> +	if (!PrepareFlushBuffer(bufdesc, &max_lsn))
> +	{
> +		LWLockRelease(content_lock);
> +		return;
> +	}
>
>  	/* Release exclusive lock; the batch will write the page later */
>  	LWLockRelease(content_lock);
> -
> -	/*
> -	 * Add LSN to caller's batch tracking.
> -	 * Caller handles XLogFlush() using highest LSN.
> -	 */
> -	PrepareBufferForCheckpoint(bufdesc, max_lsn);
>  }
>
> AFAIK, there has never been a function called
> PrepareBufferForCheckpoint() and my current patchset doesn't have it
> either.
>
> - Melanie
Thank you for reviewing the patch and for the clarification.

It turns out I was working from the v11 version of the patchset, which I had pulled in November, so I did not have the newer fixes that already address the XLogRecPtrIsValid() check and the shared-lock early-return case in CleanVictimBuffer(). That is why the first part of my diff duplicated work that had already been done.

Regarding the last part of the diff: in my local tree, CleanVictimBuffer() still read the WAL LSN and performed the dirty check under the exclusive lock, and then marked/queued the buffer for write. I had interpreted that section as functionally similar to the PrepareBufferForCheckpoint() described early in this thread, and since I did not have the newer version of the patchset where that function no longer exists, I refactored the section into a second PrepareFlushBuffer() call. The goal was to re-check the buffer after the lock upgrade and to keep the failure paths symmetrical (unlock and early return in both branches); a rough sketch of the pattern I had in mind follows. I understand now that this was based on outdated context, and why it conflicts with the current patch flow.
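To be concrete, this is roughly the shape I was aiming for. It is only a sketch against my outdated local tree: PrepareFlushBuffer() and max_lsn come from the in-flight patchset, and the wrapper function name is made up for illustration.

/*
 * Sketch only (outdated v11 context): check the buffer under a shared
 * lock, then re-check after upgrading to exclusive, since LWLocks
 * cannot be upgraded in place and the buffer may change while unlocked.
 */
static void
clean_victim_buffer_sketch(BufferDesc *bufdesc, XLogRecPtr *max_lsn)
{
	LWLock	   *content_lock = BufHdrGetContentLock(bufdesc);

	LWLockAcquire(content_lock, LW_SHARED);
	if (!PrepareFlushBuffer(bufdesc, max_lsn))
	{
		/* Nothing to flush; same failure path as the re-check below */
		LWLockRelease(content_lock);
		return;
	}

	/* "Upgrade" by dropping and reacquiring the lock */
	LWLockRelease(content_lock);
	LWLockAcquire(content_lock, LW_EXCLUSIVE);

	/* Re-check: buffer state may have changed while we held no lock */
	if (!PrepareFlushBuffer(bufdesc, max_lsn))
	{
		LWLockRelease(content_lock);
		return;
	}

	/* Release exclusive lock; the batch writes the page later */
	LWLockRelease(content_lock);
}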
I will rebase my work onto the latest version of the patchset and drop that section to avoid the misalignment. If there are any specific areas where I can help with the ongoing work, please let me know; I am happy to keep contributing in the direction you are taking the patchset.

Regards,
Soumya