Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Yeah ... so just go with a constant estimate of say 200 deletable tuples
>> per page?

> How about a constant estimate using the average tuple width code?

I think that's overthinking the problem.  The point here is mostly for
vacuum to not consume 512MB (or whatever you have maintenance_work_mem
set to) when vacuuming a ten-page table.  I think that if we
significantly increase the risk of having to make multiple index passes
on medium-size tables, we'll not be doing anyone any favors.
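Just to make that concrete, here's a rough standalone sketch (assumed
constants and a made-up helper name, not the actual vacuumlazy.c code) of
clamping the dead-tuple array by the table's size instead of always grabbing
the full maintenance_work_mem:

    #include <stdio.h>

    /* Illustrative values only -- not taken from the real headers */
    #define MAX_HEAP_TUPLES_PER_PAGE 292L   /* MaxHeapTuplesPerPage in CVS HEAD */
    #define ITEM_POINTER_SIZE        6L     /* sizeof(ItemPointerData) */

    /*
     * Hypothetical sizing rule: cap the dead-tuple array at what the table
     * could possibly need (MaxHeapTuplesPerPage slots per heap page), and
     * otherwise let maintenance_work_mem set the ceiling.
     */
    static long
    dead_tuple_slots(long maintenance_work_mem_bytes, long rel_pages)
    {
        long    by_mem = maintenance_work_mem_bytes / ITEM_POINTER_SIZE;
        long    by_pages = rel_pages * MAX_HEAP_TUPLES_PER_PAGE;

        return (by_pages < by_mem) ? by_pages : by_mem;
    }

    int
    main(void)
    {
        long    mwm = 512L * 1024L * 1024L;   /* 512MB maintenance_work_mem */

        /* a ten-page table gets 2920 slots (~17KB), not 512MB worth */
        printf("10 pages:     %ld slots\n", dead_tuple_slots(mwm, 10));
        /* a big table is still limited by maintenance_work_mem */
        printf("500000 pages: %ld slots\n", dead_tuple_slots(mwm, 500000));
        return 0;
    }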

If we went with allocating MaxHeapTuplesPerPage slots per page (292 in
CVS HEAD), 512MB would correspond to a bit over 300,000 pages, and you'd
get memory savings for anything less than that.  But that's already a
2GB table --- do you want to risk multiple index passes because you were
chintzy with your memory allocation?
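Working that out (a quick check with assumed constants; the real numbers come
from the headers):

    #include <stdio.h>

    #define BLCKSZ                   8192L  /* default block size, assumed */
    #define MAX_HEAP_TUPLES_PER_PAGE 292L   /* MaxHeapTuplesPerPage in CVS HEAD */
    #define ITEM_POINTER_SIZE        6L     /* sizeof(ItemPointerData) */

    int
    main(void)
    {
        long    mwm = 512L * 1024L * 1024L;                      /* 512MB */
        long    bytes_per_page = MAX_HEAP_TUPLES_PER_PAGE * ITEM_POINTER_SIZE;
        long    pages = mwm / bytes_per_page;

        printf("slot bytes per heap page: %ld\n", bytes_per_page); /* 1752 */
        printf("pages covered by 512MB:   %ld\n", pages);          /* ~306000 */
        printf("heap size at that point:  %ld MB\n",
               pages * BLCKSZ / (1024L * 1024L));                  /* ~2400 MB */
        return 0;
    }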

Ultimately, the answer for a DBA who sees "out of memory" a lot is to
reduce his maintenance_work_mem.  I don't think VACUUM should be trying
to substitute for the DBA's judgment.

BTW, if an autovac worker gets an elog(ERROR) on one table, does it die
or continue on with the next table?

                        regards, tom lane
