Claudio Freire wrote:
> On Thu, Feb 8, 2018 at 8:39 PM, Alvaro Herrera <> wrote:

> During the process of developing the patch, I got seriously broken
> code that passed the tests nonetheless. The test as it stood was very
> ineffective at actually detecting issues.
> This new test may be slow, but it's effective. That's a very good
> reason to accept making it slower, if you ask me.

OK, I don't disagree with improving the test, but if we can make it fast
*and* effective, that's better than slow and effective.

> > Another argument against the LOCK pg_class idea is that it causes an
> > unnecessary contention point across the whole parallel test group --
> > with possible weird side effects.  How about a deadlock?
> The real issue with LOCK pg_class is that locks on pg_class are
> short-lived, so waiting on one doesn't mean waiting for whole
> transactions.
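The wait-loop idea mentioned below could look roughly like this — a minimal, untested sketch that polls pg_stat_activity until no other backend holds an open transaction (the exact predicate a regression test would need is an assumption here):

```sql
-- Hedged sketch: wait for concurrent transactions to finish instead of
-- relying on a short-lived LOCK on pg_class. The filter condition is
-- an assumption, not the actual test's logic.
do $$
begin
  loop
    exit when not exists (
      select 1 from pg_stat_activity
      where backend_xid is not null
        and pid <> pg_backend_pid()
    );
    perform pg_sleep(0.1);
  end loop;
end $$;
```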


> > Other than the wait loop I proposed, I think we can make a couple of
> > very simple improvements to this test case to avoid a slowdown:
> >
> > 1. the DELETE takes about 1/4th of the time and removes about the same
> > number of rows as the one using the IN clause:
> >   delete from vactst where random() < 3.0 / 4;
> I did try this at first, but it causes random output, so the test
> breaks randomly.

OK.  Still, your query seqscans the table twice.  Maybe it's possible to
use a CTID scan to avoid that, but I'm not sure how.
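One possible shape for a CTID-based rewrite — purely a sketch, with the column name `i` and the 3/4 selectivity assumed — is to gather the victim TIDs once and delete by ctid, so the delete itself becomes a TID scan rather than a second seqscan:

```sql
-- Hedged sketch (untested): one seqscan to collect TIDs, then a
-- TID-scan delete via ctid = ANY(...). Column name "i" is assumed.
delete from vactst
where ctid = any (array(
    select ctid from vactst where i % 4 <> 0));
```

Whether the planner actually chooses a TID scan here would need checking with EXPLAIN; this only avoids the second full seqscan, not the first.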

> > 3. Figure out the minimum size for the table that triggers the behavior
> >    you want.  Right now you use 400k tuples -- maybe 100k are sufficient?
> >    Don't know.
> For that test, I need enough *dead* tuples to cause several
> index-cleanup passes. Even small maintenance_work_mem (mwm) settings
> require tons of dead tuples for this. In fact, I'm thinking that
> number might be too low for its purpose, even. I'll re-check, but I
> doubt it's too high. If anything, it's too low.
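For context on why so many dead tuples are needed: lazy vacuum (as of this era) stores one 6-byte ItemPointerData per dead tuple in its maintenance_work_mem-sized array, so even the minimum setting holds a large number of pointers per index-cleanup pass. A back-of-envelope check:

```sql
-- Hedged arithmetic: minimum maintenance_work_mem is 1MB, and each
-- dead-tuple TID costs 6 bytes, so one index-cleanup pass absorbs
-- roughly this many dead tuples:
select (1024 * 1024) / 6 as dead_tids_per_pass;  -- 174762
```

So forcing even two or three passes at mwm = 1MB needs several hundred thousand dead tuples, consistent with the 400k figure above.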


Álvaro Herrera      
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
