On Mon, Feb 20, 2017 at 3:01 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 20 February 2017 at 09:15, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> On Mon, Feb 20, 2017 at 7:26 AM, Masahiko Sawada <sawada.m...@gmail.com> 
>> wrote:
>>> On Fri, Feb 17, 2017 at 3:41 AM, Robert Haas <robertmh...@gmail.com> wrote:
>>>> On Thu, Feb 16, 2017 at 6:17 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>>>> On 15 February 2017 at 08:07, Masahiko Sawada <sawada.m...@gmail.com> 
>>>>> wrote:
>>>>>> It's a bug. Attached latest version patch, which passed make check.
>>>>> 2. The current btree vacuum code requires 2 vacuums to fully reuse
>>>>> half-dead pages. So skipping an index vacuum might mean that second
>>>>> index scan never happens at all, which would be bad.
>>>> Maybe.  If there are a tiny number of those half-dead pages in a huge
>>>> index, it probably doesn't matter.  Also, I don't think it would never
>>>> happen, unless the table just never gets any more updates or deletes -
>>>> but that case could also happen today.  It's just a matter of
>>>> happening less frequently.
>> Yeah, that's right, and I am not sure it is worth performing a
>> complete pass to reclaim dead/deleted pages unless we know somehow
>> that there are many such pages.
> Agreed.... which is why
> On 16 February 2017 at 11:17, Simon Riggs <si...@2ndquadrant.com> wrote:
>> I suggest that we store the number of half-dead pages in the metapage
>> after each VACUUM, so we can decide whether to skip the scan or not.
>> Also, I think we do reclaim the
>> complete page while allocating a new page in btree.
> That's not how it works according to the README at least.

I am referring to the code path in _bt_getbuf(), specifically the
"if (_bt_page_recyclable(page))" check; won't that help us reclaim
the space?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)