Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > I'm not having much luck really.  I think the problem is that ANALYZE
> > stores reltuples as the number of live tuples, so if you delete a big
> > portion of a big table, then ANALYZE and then VACUUM, there's a huge
> > misestimation and extra index cleanup passes happen, which is a bad
> > thing.
> 
> Yeah ... so just go with a constant estimate of say 200 deletable tuples
> per page?

How about we compute the estimate from the average tuple width code instead
of using a flat constant?

-- 
Alvaro Herrera                 http://www.amazon.com/gp/registry/CTMLCN8V17R4
"In fact, the basic problem with Perl 5's subroutines is that they're not
crufty enough, so the cruft leaks out into user-defined code instead, by
the Conservation of Cruft Principle."  (Larry Wall, Apocalypse 6)
