Gregory Stark wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:

How much memory would it save during VACUUM on a 1 billion row table
with 200 million dead rows? Would that reduce the number of cycles a
normal non-interrupted VACUUM would perform?

It would depend on how many dead tuples you have per-page. If you have a very
large table with only one dead tuple per page then it might even be larger.
But in the usual case it would be smaller.

FWIW, there are some unused bits in the current representation, so it might actually be possible to design the new representation so that it's never larger.
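
Just to put rough numbers on Simon's example (back-of-the-envelope only; the per-page figure below is an assumption, not a measurement):

/* Back-of-the-envelope comparison, not measured numbers.
 * Assumptions: 1 billion rows, 200 million dead, ~100 tuples per 8K
 * heap page, and the usual 6-byte ItemPointerData per dead tuple in
 * the current representation.
 */
#include <stdio.h>

int main(void)
{
    double rows = 1e9, dead = 2e8, tuples_per_page = 100.0;
    double pages = rows / tuples_per_page;          /* ~10 million pages */

    /* Current representation: a 6-byte tid per dead tuple */
    double tid_array = dead * 6;                    /* ~1.2 GB */

    /* Hypothetical per-page bitmap: one bit per line pointer slot plus
     * a 4-byte block number per page, assuming every page has at least
     * one dead tuple (the worst case for the bitmap's overhead). */
    double bitmap = pages * (4 + tuples_per_page / 8);  /* ~165 MB */

    printf("tid array: %.0f MB, bitmap: %.0f MB\n",
           tid_array / 1e6, bitmap / 1e6);
    return 0;
}

With 200 million dead tuples the current array alone is far larger than a typical maintenance_work_mem, which is what forces the extra index scan cycles Simon is asking about.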

One optimization to the current structure, instead of switching to a bitmap, would be to store the block number just once for each block, followed by a variable-length list of offsets within that block (roughly like the sketch below). It would complicate the binary search, though.
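
Something along these lines, just to illustrate the layout (the names are made up, this isn't from any patch):

/* Sketch only: one entry per heap block that has dead tuples,
 * followed by a variable-length, sorted list of offsets within that
 * block.  BlockNumber and OffsetNumber stand in for the usual 4- and
 * 2-byte PostgreSQL types. */
#include <stdint.h>

typedef uint32_t BlockNumber;
typedef uint16_t OffsetNumber;

typedef struct DeadBlockEntry
{
    BlockNumber  blkno;        /* block number stored just once */
    uint16_t     noffsets;     /* how many dead line pointers follow */
    OffsetNumber offsets[];    /* sorted, variable-length offset list */
} DeadBlockEntry;

Because the entries are variable-length you can no longer index straight into the array; you'd need a separate directory of entry positions, or a two-level search, which is where the binary search gets more complicated.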

> Also note that it would have to be non-lossy.

Yep. Or actually, it might be useful to forget some dead tids if it allowed you to memorize a larger number of other dead tids. Hmm, what a weird thought :).

Another insight I had while thinking about this is that the dead tid list behaves quite nicely from an OS memory management point of view. In the first vacuum phase, the array is filled in sequence, which means that the OS can swap out the early parts of it and use the memory for buffer cache instead. In the index scan phase it's accessed randomly, but if the table is clustered, the access pattern is in fact not completely random. In the second vacuum phase, the array is scanned sequentially again. I'm not sure how that works out in practice, but you might want to use a larger maintenance_work_mem than you'd think.
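
For what it's worth, the random access in the index scan phase comes from the lookup each index entry's heap pointer does against the sorted dead-tid array, roughly like this simplified sketch (not the actual vacuumlazy.c code):

/* Simplified sketch of the per-index-entry lookup; the real thing
 * lives in vacuumlazy.c.  The dead-tid array is sorted in heap order,
 * which is why a clustered table makes these probes much less random
 * than the worst case. */
#include <stdlib.h>
#include <stdint.h>

typedef struct { uint32_t blkno; uint16_t offnum; } DeadTid;

static int
dead_tid_cmp(const void *a, const void *b)
{
    const DeadTid *l = a, *r = b;

    if (l->blkno != r->blkno)
        return (l->blkno < r->blkno) ? -1 : 1;
    if (l->offnum != r->offnum)
        return (l->offnum < r->offnum) ? -1 : 1;
    return 0;
}

/* Returns non-zero if the heap tuple an index entry points to is in
 * the dead-tid array, i.e. the index entry can be removed. */
int
tid_is_dead(const DeadTid *tid, const DeadTid *deadtids, size_t ndead)
{
    return bsearch(tid, deadtids, ndead, sizeof(DeadTid),
                   dead_tid_cmp) != NULL;
}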

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
