Hi,

On 2017-04-05 10:40:41 -0400, Stephen Frost wrote:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
> > Stephen Frost <sfr...@snowman.net> writes:
> > > I believe that what Peter was getting at is that the pg_dump TAP tests
> > > create a whole slew of objects in just a few seconds and are able to
> > > then exercise those code-paths in pg_dump, without needing to run the
> > > entire serial regression test run.
> > 
> > Right.  But there's a certain amount of serendipity involved in using the
> > core regression tests' final results.  For example, I don't know how long
> > it would've taken us to understand the problems around dumping and
> > reloading child tables with inconsistent column orders, had there not been
> > examples of that in the regression tests.  I worry that creating a sterile
> > set of objects for testing pg_dump will leave blind spots, because it will
> > mean that we only test cases that we explicitly created test cases for.
> 
> We don't need to only create sterile sets of objects in the pg_dump TAP
> tests.

I really, really don't understand why we're conflating making the pg_upgrade
tests less fragile and less duplicative with changing what we use to test
them. This seems to have the sole result that we're not going to get
anywhere.


> I don't believe we need to populate GIN indexes or vacuum them
> to test pg_dump/pg_upgrade either (at least, not if we're going to stick
> to the pg_upgrade test basically being if pg_dump returns the same
> results before-and-after).

I think we *should* have populated GIN indexes. Yes, the coverage isn't
perfect, but the VACUUM definitely gives a decent amount of coverage of
whether the GIN index looks halfway sane after the upgrade.
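
For illustration only (the table and index names here are hypothetical, not
taken from any existing regression test), the kind of setup I mean would be
something like:

```sql
-- Hypothetical sketch: create, populate, and vacuum a GIN index so the
-- upgraded cluster carries a non-empty index worth sanity-checking.
CREATE TABLE gin_test (id serial PRIMARY KEY, vals int[]);

INSERT INTO gin_test (vals)
    SELECT ARRAY[i % 10, i % 7, i % 3]
    FROM generate_series(1, 1000) AS s(i);

CREATE INDEX gin_test_idx ON gin_test USING gin (vals);

-- VACUUM moves entries from the GIN pending list into the main index
-- structure, so the on-disk index is exercised, not just the pending list.
VACUUM gin_test;

-- After pg_upgrade, an index-backed query should still return sane results:
-- SELECT count(*) FROM gin_test WHERE vals @> ARRAY[5];
```

The point being that dumping/querying such an index after the upgrade checks
the on-disk GIN structure, which an empty or never-vacuumed index wouldn't.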


Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
