Hi,

On 2014-01-20 15:39:33 -0300, Alvaro Herrera wrote:
> * The multixact_freeze_table_age value has been set to 5 million.
> I feel this is a large enough number that it shouldn't cause too much
> vacuuming churn, while at the same time not leaving excessive storage
> occupied by pg_multixact/members, whose size is amplified by the
> average number of members in each multi.

That seems *far* too low to me. Remember that in some workloads we've
seen pg_controldata outputs with a far higher next multi than next xid;
for those, such a low setting will cause excessive full-table scans. I
really think we shouldn't change the default freeze_table_age for
multis at all. I think we should have a lower value for the
vacuum_freeze_min_age equivalent, but that's it.
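
For anyone who wants to check their own cluster, something along these
lines shows the relative consumption (field labels as printed by a
9.3-era pg_controldata; the values below are made up for illustration):

    $ pg_controldata "$PGDATA" | grep -E 'NextXID|NextMultiXactId'
    Latest checkpoint's NextXID:          0/45231890
    Latest checkpoint's NextMultiXactId:  612340112

Where the multi counter advances at a comparable or faster rate than
the xid counter, a 5 million multixact_freeze_table_age forces
whole-table vacuum scans far more often than the xid-based default
does.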

> (A bit of math: each Xid uses 2 bits.  Therefore, for the default 200
> million transactions of vacuum_freeze_table_age we use 50 million bytes,
> or about 48 MB of space, plus some room for per-page LSNs.  For each
> multi we use 4 bytes in offsets plus 5 bytes per member; if we assume 2
> members per multi on average, that totals 70 million bytes for the
> default multixact_freeze_table_age, so about 66 MB of space.)

That doesn't seem to me like sufficient cause to change the default.
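
Spelling those numbers out for reference (with the 2 members per multi
assumed above):

    xids:    200,000,000 * 2 bits          = 50,000,000 bytes  (~48 MB in pg_clog)
    multis:    5,000,000 * (4 + 2*5) bytes = 70,000,000 bytes  (~66 MB in pg_multixact)

So we are talking about a few tens of megabytes either way.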

> * I have named the parameters by simply replacing "vacuum" with
> "multixact".  I could instead have added the "multixact" word in the
> middle:
> vacuum_multixact_freeze_min_age
> but this doesn't seem an improvement.

I vote for the longer version. Right now you can get all the relevant
vacuum parameters by grepping/searching for "vacuum"; we shouldn't give
that up. If vacuum_multixact_freeze_min_age is considered too long, I'd
rather replace "multixact" with "mxid" or some such.
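
To make that concrete, the set would then look something like this in
postgresql.conf (assuming the autovacuum_ variant follows the same
naming pattern; the values here are placeholders, not proposed
defaults):

    vacuum_multixact_freeze_min_age      = 5000000
    vacuum_multixact_freeze_table_age    = 150000000
    autovacuum_multixact_freeze_max_age  = 400000000

All of those still show up next to the existing vacuum_freeze_*
parameters when searching for "vacuum".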

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

