Andres Freund <and...@2ndquadrant.com> writes:
> On 2014-11-19 19:59:33 +0200, Heikki Linnakangas wrote:
>> This grew "pg_dumpall | wc -c" from 5505689 to 6926066 bytes. The size of
>> the regression database, according to psql's "\l+" command, grew from 45 MB
>> to 57 MB. The amount of WAL generated by "make installcheck" grew from 75 MB
>> to 104 MB.

> Why not just drop some of these slightly larger tables after the test?
> That way the maximum size of the database and of the dump wouldn't
> increase as much. I don't think these are that interesting to look
> into after the test has finished.

While the post-run database size is interesting, I think the peak in-run
size is probably the more critical number, since that affects whether
buildfarm critters with limited disk space are going to run out.

Still, I agree that we could drop any such tables that don't add new
cases for pg_dump to think about.

Another thought, if we drop large tables at completion of their tests,
is that these particular tests ought not be run in parallel with each
other.  That's just uselessly concentrating the peak space demand.
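
Concretely, I'm thinking of the grouping in
src/test/regress/parallel_schedule, where each "test:" line is one
group run concurrently.  Something along these lines (the big-test
names are invented for illustration) would keep the two space-hungry
scripts out of the same group:

    # hypothetical parallel_schedule excerpt: put the two bulky tests in
    # different groups so their disk footprints don't overlap
    test: big_dump_coverage_a smaller_test_x
    test: big_dump_coverage_b smaller_test_y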

                        regards, tom lane

