"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> At some point it might make sense to convert the FSM into a bitmap; that
> way everything just scales with database size.
> In the meantime, I'm not sure if it makes sense to tie the FSM size to
> the DSM size, since each FSM page requires 48x the storage of a DSM
> page. I think there's also a lot of cases where FSM size will not scale
> the same way DSM size will, such as when there's historical data in the
A bitmapped FSM is interesting; strict accuracy is perhaps not needed for
the FSM. If we changed the FSM to a 2-bits-per-page bitmap, it would need
only 1/24 of the shared memory it uses now (2 bits instead of 6 bytes per
page). However, 6 bytes/page is small enough for normal use; we would need
to reconsider this only if we move to TB-class, heavily updated databases.
> That raises another question... what happens when we run out of DSM
First, discard completely clean memory chunks in the DSM, where 'clean'
means that all tuples managed by the chunk are frozen. This transition is
lossless. Second, discard the least-recently-vacuumed tables along with
their chunks. We can assume such tables have accumulated many dead tuples,
so an almost-full scan will be required anyway; it is not worth continuing
to track them.
Many optimizations are still possible beyond this point, but I'll offer a
not-so-complex suggestion in the meantime.
NTT Open Source Software Center