* Michael Paquier (michael.paqu...@gmail.com) wrote:
> On Tue, Apr 4, 2017 at 10:52 PM, Peter Eisentraut
> <peter.eisentr...@2ndquadrant.com> wrote:
> > On 4/3/17 11:32, Andres Freund wrote:
> >> That doesn't strike as particularly future proof.  We intentionally
> >> leave objects behind pg_regress runs, but that only works if we actually
> >> run them...
> >
> > I generally agree with the sentiments expressed later in this thread.
> > But just to clarify what I meant here:  We don't need to run a, say,
> > 1-minute serial test to load a few "left behind" objects for the
> > pg_upgrade test if we can load the same set of objects using dedicated
> > scripting in, say, 2 seconds.  This would make the pg_upgrade tests
> > faster and would reduce the hidden dependencies in the main tests
> > about which kinds of objects need to be left behind.
> Making the tests run faster while maintaining the current code
> coverage is nice, but it complicates test suite maintenance: it
> requires either a dedicated regression schedule or an extra test
> suite in which objects are created just for the sake of pg_upgrade.
> That increases the risk of the test suite rotting over time if patch
> authors and reviewers are not careful.

I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are then able
to exercise those code paths in pg_dump, without needing the entire
serial regression run.

I'm still not completely convinced that we actually need to
independently test pg_upgrade by creating all the objects which the
pg_dump TAP tests do, given that pg_upgrade just runs pg_dump
underneath.  If we really want to do that, however, what we should do
is abstract out the pg_dump set of tests into a place that both the
pg_dump and pg_upgrade TAP tests could use to create all the supported
object types for their tests.


