[EMAIL PROTECTED] (Csaba Nagy) writes:

>> > [...]
>> > There has to be a more linear way of handling this scenario.
>> 
>> So vacuum the table often.
>
> Good advice, except if the table is huge :-)

... Then the table shouldn't be designed to be huge.  That represents
a design error.

> Here we have for example some tables which are frequently updated but
> contain >100 million rows. Vacuuming that takes hours. And the dead row
> candidates are the ones which are updated again and again and looked up
> frequently...

This demonstrates that "archival" material and "active" data should be
kept separate.

They have different access patterns; kludging them into the same table
turns out badly.
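
A minimal sketch of that split, assuming names invented here for
illustration (an "orders" table with an "updated_at" column, an
"order_archive" table, a 30-day cutoff, and a made-up DSN), using
Python with psycopg2 as the client:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string

    # Move the cold rows out of the hot table in one transaction.
    # (A production script would also guard against rows changing
    # between the two statements.)
    with conn, conn.cursor() as cur:
        cur.execute("""
            INSERT INTO order_archive
            SELECT * FROM orders
            WHERE updated_at < now() - interval '30 days'
        """)
        cur.execute("""
            DELETE FROM orders
            WHERE updated_at < now() - interval '30 days'
        """)

    # VACUUM cannot run inside a transaction block, so switch to autocommit.
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("VACUUM ANALYZE orders")  # cheap once the table stays small
    conn.close()

Run something like that on a schedule and the frequently-updated table
stays small enough to vacuum in minutes rather than hours.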

> A good solution would be a new type of vacuum which does not need to
> do a full table scan but can clean the pending dead rows without
> that... I guess then I could vacuum really frequently those tables.

That's yet another feature on the ToDo list: the "Vacuum Space Map."

The notion is to have lists of recently modified pages, and to
restrict VACUUM to those pages.  (Probably a special version of
VACUUM...)
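
To make the notion concrete, here is a toy, in-memory sketch (my own
illustration, not PostgreSQL internals): updates record which pages
they dirtied, and the restricted vacuum visits only those pages
instead of scanning every page.

    class Table:
        def __init__(self, n_pages):
            # Each page holds [value, is_dead] slots.
            self.pages = [[] for _ in range(n_pages)]
            self.dirty_pages = set()   # the "space map": pages with dead rows

        def insert(self, page_no, value):
            self.pages[page_no].append([value, False])

        def update(self, page_no, slot, new_value):
            # MVCC-style update: the old version stays behind as a dead row.
            self.pages[page_no][slot][1] = True
            self.pages[page_no].append([new_value, False])
            self.dirty_pages.add(page_no)

        def vacuum_full_scan(self):
            # Today's VACUUM: every page is read, clean or not.
            for page_no in range(len(self.pages)):
                self._reclaim(page_no)
            self.dirty_pages.clear()

        def vacuum_space_map(self):
            # The proposed behaviour: only recently modified pages are read.
            for page_no in self.dirty_pages:
                self._reclaim(page_no)
            self.dirty_pages.clear()

        def _reclaim(self, page_no):
            self.pages[page_no] = [s for s in self.pages[page_no] if not s[1]]

On a table where only a small hot set of pages churns, that turns the
cost of a vacuum from "proportional to table size" into "proportional
to pages touched since the last vacuum."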
-- 
output = reverse("moc.enworbbc" "@" "enworbbc")
http://www.ntlug.org/~cbbrowne/lisp.html
"As  I've gained  more  experience with  Perl  it strikes  me that  it
resembles Lisp in many ways, albeit Lisp as channeled by an awk script
on acid."  -- Tim Moore (on comp.lang.lisp)
