On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
<funny.fal...@postgrespro.ru> wrote:
> On 2017-07-24 19:11, Claudio Freire wrote:
>> On Mon, Jul 24, 2017 at 6:37 AM, Sokolov Yura
>> <funny.fal...@postgrespro.ru> wrote:
>>> Good day, Claudio
>>> On 2017-07-22 00:27, Claudio Freire wrote:
>>>> On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
>>>> <funny.fal...@postgrespro.ru> wrote:
>>>>> My friend noticed that I didn't say why I bother with autovacuum.
>>>>> Our customers suffer from table bloat. I've made a synthetic
>>>>> bloat test and started experiments with modifying micro- and
>>>>> auto-vacuum. My first attempts were to update the FSM early (in both
>>>>> microvacuum and autovacuum) and update it up to the root, not only the lowest level.
>>>> This FSM thing is probably not a bad idea as well.
>>>> We're forced to run regular manual vacuums because for some tables
>>>> autovacuum never seems to be enough, no matter how it's configured,
>>>> mostly because it gets canceled all the time. These are high-churn,
>>>> huge tables, so vacuuming them takes hours or days, there's always
>>>> someone with a conflicting lock at some point that ends up canceling
>>>> the autovacuum task.
>>>> The above paragraph triggered me to go check, and it seems in those
>>>> cases the FSM never gets vacuumed. That's probably not a good thing,
>>>> but I don't see how to vacuum the FSM after a cancel. So vacuuming the
>>>> FSM from time to time during long-running vacuums seems like a good
>>>> idea at this point.
>>> The attached patch changes the FSM update: instead of updating only the
>>> lowest level, it propagates the space increase up to the root.
>>> It slows autovacuum a bit, which is why I didn't propose it together with
>>> the ring buffer increase.
>> I was mostly thinking about something like the attached patch.
>> Simple, unintrusive, and shouldn't cause any noticeable slowdown.
> Your change is small, clear, and currently useful for huge tables under
> high update load (until "allowing vacuum to use more than 1GB memory"
> is merged).

In high-bloat conditions, it doesn't take long to accumulate 1GB of
dead tuple TIDs (which is about 178M tuples, btw, at 6 bytes each).

The index scan takes way longer than the heap scan in that case.

> But it still delays updating the FSM until the whole first batch of dead
> tuples is cleared (i.e. all indexes scanned and all heap pages cleaned), and
> on such a huge table that will take hours.

So, true, it will get delayed considerably. But as you realized,
there's not much point in trying to vacuum the FSM sooner, since it
won't be accurate shortly afterwards anyway. Dead line pointers do use
up a fair bit of space, especially on narrow tables.

In a particular table I have that exhibits this problem, most of the
time is spent scanning the indexes. It performs dozens of index scans
before it's done, so it would vacuum the FSM often enough, even
if I were to increase the maintenance_work_mem setting n-fold.

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)