On Fri, Feb 2, 2018 at 11:13 PM, Claudio Freire <klaussfre...@gmail.com> wrote:
> On Thu, Feb 1, 2018 at 9:34 PM, Masahiko Sawada <sawada.m...@gmail.com> wrote:
>> On Mon, Jan 29, 2018 at 11:31 PM, Claudio Freire <klaussfre...@gmail.com> 
>> wrote:
>>> On Mon, Jan 29, 2018 at 4:12 AM, Masahiko Sawada <sawada.m...@gmail.com> 
>>> wrote:
>>>> On Sat, Jul 29, 2017 at 9:42 AM, Claudio Freire <klaussfre...@gmail.com> 
>>>> wrote:
>>>>> Introduce a tree pruning threshold to FreeSpaceMapVacuum that avoids
>>>>> recursing into branches that already contain enough free space, to
>>>>> avoid having to traverse the whole FSM and thus induce quadratic
>>>>> costs. Intermediate FSM vacuums are only supposed to make enough
>>>>> free space visible to avoid extension until the final (non-partial)
>>>>> FSM vacuum.
>>>> Hmm, I think this resolves part of the issue. How about calling
>>>> AutoVacuumRequestWork() in PG_CATCH() if VACOPT_VACUUM is specified,
>>>> passing the relid that we were vacuuming but could not complete as a
>>>> new autovacuum work-item? The new work-item would make the worker
>>>> vacuum the FSMs of the given relation and its indexes.
>>> Well, I tried that in fact, as I mentioned in the OP.
>>> I abandoned it due to the conjunction of the 2 main blockers I found
>>> and mentioned there. In essence, those issues defeat the purpose of
>>> the patch (to get the free space visible ASAP).
>>> Don't forget, this is aimed at cases where autovacuum of a single
>>> relation takes a very long time. That is, very big relations. Maybe
>>> days, like in my case. A whole autovacuum cycle can take weeks, so
> delaying FSM vacuum that much is not good, and using work items still
> causes those delays, not to mention the segfaults.
>> Yeah, I agree with vacuuming the FSM more frequently, because it can
>> prevent table bloat caused by concurrent modifications. But to prevent
>> the table bloat caused by cancellation of autovacuum, I guess we need
>> something more. This proposal only gives us the ability to say we
>> "might" prevent table bloat due to cancellation of autovacuum. Since
>> we can know that autovacuum was cancelled, I'd like to have a way to
>> reliably vacuum the FSM even if the vacuum is cancelled. Thoughts?
> After autovacuum gets cancelled, the next time it wakes up it will
> retry vacuuming the cancelled relation. That's because a cancelled
> autovacuum doesn't update the last-vacuumed stats.
> So the timing between an autovacuum work item and the next retry for
> that relation is more or less an autovacuum nap time, except perhaps
> in the case where many vacuums get cancelled, and they have to be
> queued.

I think that's not true if there are multiple databases.

> That's why an initial FSM vacuum makes sense. It has a similar timing
> to the autovacuum work item, it has the benefit that it can be
> triggered manually (manual vacuum), and it's cheap and efficient.
>> Also, the patch always vacuums the FSM at the beginning of the vacuum
>> with a threshold, but that is useless if the table has already been
>> properly vacuumed. I don't think it's a good idea to add an additional
>> step that "might be" efficient, because vacuum is already heavy.
> FSMs are several orders of magnitude smaller than the tables
> themselves. A TB-sized table I have here has a 95M FSM. If you add
> threshold skipping, that initial FSM vacuum *will* be efficient.
> By definition, the FSM will be less than 1/4000th of the table, so
> that initial FSM pass takes less than 1/4000th of the whole vacuum.
> Considerably less considering the simplicity of the task.

I agree that the FSM is much smaller than the heap, and that vacuuming
the FSM will not be comparatively heavy, but I'm afraid vacuum will get
heavier over time if we pile up improvements that are individually small
but might not be efficient. For example, a feature for reporting the
last vacuum status has been proposed[1]. I wonder if we could use it to
determine whether to do the FSM vacuum at the beginning of a vacuum.

>> On detail of the patch,
>> --- a/src/backend/storage/freespace/indexfsm.c
>> +++ b/src/backend/storage/freespace/indexfsm.c
>> @@ -70,5 +70,5 @@ RecordUsedIndexPage(Relation rel, BlockNumber usedBlock)
>>  void
>>  IndexFreeSpaceMapVacuum(Relation rel)
>>  {
>> -  FreeSpaceMapVacuum(rel);
>> +  FreeSpaceMapVacuum(rel, 0);
>>  }
>> @@ -816,11 +820,19 @@ fsm_vacuum_page(Relation rel, FSMAddress addr,
>> bool *eof_p)
>>       {
>>          int         child_avail;
>> +        /* Tree pruning for partial vacuums */
>> +        if (threshold)
>> +        {
>> +           child_avail = fsm_get_avail(page, slot);
>> +           if (child_avail >= threshold)
>> +              continue;
>> +        }
>> Don't we skip all FSM pages if we set the threshold to 0?
> No, that doesn't skip anything if threshold is 0, since it doesn't get
> to the continue, which is the one skipping nodes.

Oops, yes you're right. Sorry for the noise.



Masahiko Sawada
NTT Open Source Software Center
