On Tue, Feb 21, 2017 at 1:09 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 20 February 2017 at 10:27, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> On Mon, Feb 20, 2017 at 3:01 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> On 20 February 2017 at 09:15, Amit Kapila <amit.kapil...@gmail.com> wrote:
>>>> On Mon, Feb 20, 2017 at 7:26 AM, Masahiko Sawada <sawada.m...@gmail.com> 
>>>> wrote:
>>>>> On Fri, Feb 17, 2017 at 3:41 AM, Robert Haas <robertmh...@gmail.com> 
>>>>> wrote:
>>>>>> On Thu, Feb 16, 2017 at 6:17 AM, Simon Riggs <si...@2ndquadrant.com> 
>>>>>> wrote:
>>>>>>> On 15 February 2017 at 08:07, Masahiko Sawada <sawada.m...@gmail.com> 
>>>>>>> wrote:
>>>>>>>> It's a bug. Attached latest version patch, which passed make check.
>>>>>>> 2. The current btree vacuum code requires 2 vacuums to fully reuse
>>>>>>> half-dead pages. So skipping an index vacuum might mean that second
>>>>>>> index scan never happens at all, which would be bad.
>>>>>> Maybe.  If there are a tiny number of those half-dead pages in a huge
>>>>>> index, it probably doesn't matter.  Also, I don't think it would never
>>>>>> happen, unless the table just never gets any more updates or deletes -
>>>>>> but that case could also happen today.  It's just a matter of
>>>>>> happening less frequently.
>>>> Yeah, that's right, and I am not sure it is worth performing a
>>>> complete pass to reclaim dead/deleted pages unless we somehow know
>>>> that there are many such pages.
>>> Agreed.... which is why
>>> On 16 February 2017 at 11:17, Simon Riggs <si...@2ndquadrant.com> wrote:
>>>> I suggest that we store the number of half-dead pages in the metapage
>>>> after each VACUUM, so we can decide whether to skip the scan or not.
>>>> Also, I think we do reclaim the
>>>> complete page while allocating a new page in btree.
>>> That's not how it works according to the README at least.
>> I am referring to code (_bt_getbuf()->if (_bt_page_recyclable(page))),
>> won't that help us in reclaiming the space?
> Not unless the README is incorrect, no.

Just to ensure that we both have the same understanding, let me try to
write down what I understand about this reclaim algorithm.  AFAIU, in
the first pass vacuum will mark the half-dead pages as Deleted, and in
the second pass it will record such pages as free in the FSM so that
they can be reused as new pages when the indexam asks for a new block,
instead of extending the index relation.  Now, if we introduce this
new GUC, there is a chance that we sometimes skip the second pass
where it would not have been skipped before.

Note that we do perform the second pass in the same vacuum cycle when
the index has not been scanned for deleting tuples, as per the code below:

if (stats == NULL)
{
    stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
    btvacuumscan(info, stats, NULL, NULL, 0);
}

In the above code, stats won't be NULL if the vacuum has scanned the
index for deleting tuples (btbulkdelete).  So, based on this, I think
the second vacuum will skip scanning the index (i.e. recycling pages
marked as Deleted) only when no dead tuples were removed in that
vacuum.  Do we agree up to here?
I understand that there could be some delay in reclaiming dead pages,
but do you think it is such a big deal that we should completely scan
the index for such cases, or even try to change the metapage format?

> That section of code is just a retest of pages retrieved from FSM;

Yes, I think you are right.  In short, I agree that only vacuum can
reclaim half-dead pages.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
