On Mon, Oct 10, 2016 at 4:51 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:
> On 10/5/16 9:58 AM, Francisco Olarte wrote:
>> Is the system catalog a bottleneck for people who have real use for
>> paralell vacuum? I mean, to me someone who does this must have a very
>> big db on a big iron. If that does not consist of thousands and
>> thousands of smallish relations, it will normally be some very big
>> tables and a much smaller catalog.
> Not necessarily. Anyone that makes extensive use of temp tables can end up
> with a very large (and bloated) pg_attribute. AFAIK you can actually create
> "temp" versions of any object that lives in a schema by specifying pg_temp
> as the schema, but in practice I don't think you'll really see anything
> other than pg_attribute get really large. So it would be nice if
> pg_attribute could be done in parallel, but I suspect it's one of the
> catalog tables that could be causing these problems.

This I see, but if you churn through temp tables I'm not sure you
should do a full vacuum on the catalog ( I fear a full catalog vacuum
is going to lock DDL, and this kind of situation is better served by
autovacuum maintaining free space in the catalog so it reaches a
stable size ).

I do not think it is necessary to make every operation as fast as
possible; I prefer a simpler system. I feel someone who has a multi-
terabyte database whose use patterns need full vacuums, in parallel,
and who also churns through lots of temporary tables with a strange
use pattern that mandates full vacuums of pg_attribute ( I could
concoct a situation combining these, but not easily ) is a
specialized power DBA who should be able to easily script vacuum
tasks taking the usage pattern into account, much better than any
reasonably simple alternative. After all, -t is there and can be
repeated for exactly this sort of thing.
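To illustrate the point about scripting with repeated -t: a minimal sketch of what such a DBA-written job might look like, assuming a standard vacuumdb installation; the table list and the database name "mydb" are placeholders, not a recommendation.

```shell
#!/bin/sh
# Hypothetical sketch: target only the relations known to bloat under
# this workload, instead of a blanket parallel vacuum of everything.
# vacuumdb accepts -t multiple times, once per table.
CMD="vacuumdb --full -t pg_catalog.pg_attribute -t big_table mydb"

# A real cron job would run $CMD directly; here we just show it.
echo "$CMD"
```

The same repeated -t approach works with plain (non-full) vacuums if locking DDL is a concern.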

Francisco Olarte.

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)