On Mon, Jan 22, 2007 at 12:17:39PM -0800, Ron Mayer wrote:
> Gregory Stark wrote:
> >
> > Actually no. A while back I did experiments to see how fast reading a file
> > sequentially was compared to reading the same file sequentially but
> > skipping x% of the blocks randomly. The results were surprising (to me)
> > and depressing. The breakeven point was about 7%. [...]
> >
> > The theory online was that as long as you're reading one page from each
> > disk track you're going to pay the same seek overhead as reading the
> > entire track.
>
> Could one take advantage of this observation in designing the DSM?
>
> Instead of a separate bit representing every page, having each bit
> represent 20 or so pages might be a more useful unit. It sounds
> like the time spent reading would be similar; while the bitmap
> would be significantly smaller.
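A minimal sketch of that coarse-grained bitmap idea, in C: one bit covers a group of pages rather than a single page, so marking any page in a group sets the group's bit and a scan that finds the bit set visits the whole group. The names and the PAGES_PER_BIT/map-size constants here are illustrative assumptions, not anything from an actual DSM design.

```c
/* Sketch: a coarse dead-space map where one bit covers a group of pages.
 * PAGES_PER_BIT and the map size are hypothetical constants for
 * illustration only. */
#include <stdint.h>
#include <string.h>

#define PAGES_PER_BIT 20

typedef struct {
    uint8_t bits[1024];   /* covers 1024 * 8 * PAGES_PER_BIT pages */
} coarse_dsm;

static void dsm_init(coarse_dsm *dsm) {
    memset(dsm->bits, 0, sizeof(dsm->bits));
}

/* Map a page number to the bit for its group. */
static unsigned dsm_bit_for_page(unsigned page) {
    return page / PAGES_PER_BIT;
}

/* Record that 'page' may have dead space: sets the whole group's bit. */
static void dsm_mark_page(coarse_dsm *dsm, unsigned page) {
    unsigned bit = dsm_bit_for_page(page);
    dsm->bits[bit / 8] |= (uint8_t) (1u << (bit % 8));
}

/* True if the group containing 'page' must be visited. */
static int dsm_group_needs_visit(const coarse_dsm *dsm, unsigned page) {
    unsigned bit = dsm_bit_for_page(page);
    return (dsm->bits[bit / 8] >> (bit % 8)) & 1;
}
```

The trade-off matches the reasoning above: the map shrinks by a factor of PAGES_PER_BIT, at the cost of visiting up to PAGES_PER_BIT pages for one dirty page — cheap if those pages share a disk track anyway.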
If we extended relations by more than one page at a time we'd probably have
a better shot at the blocks on disk being contiguous and all read at the
same time by the OS.
--
Jim Nasby                                    [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

---------------------------(end of broadcast)---------------------------
TIP 7: You can help support the PostgreSQL project by donating at

                http://www.postgresql.org/about/donate
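The batched-extension idea above can be sketched in C using posix_fallocate, which asks the filesystem to allocate a whole range at once and so gives it a chance to pick contiguous blocks. The function name and the BLCKSZ/EXTEND_BLOCKS constants are illustrative assumptions, not PostgreSQL's actual extension policy.

```c
/* Sketch: grow a relation file by several blocks per extension instead
 * of one, so the filesystem can allocate the range contiguously.
 * BLCKSZ and EXTEND_BLOCKS are hypothetical constants. */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

#define BLCKSZ        8192
#define EXTEND_BLOCKS 16    /* grow by 16 blocks per extension */

/* Extend fd from its current size by EXTEND_BLOCKS blocks in one call.
 * Returns the new size in blocks, or -1 on error. */
static long extend_relation(int fd) {
    struct stat st;
    if (fstat(fd, &st) != 0)
        return -1;
    if (posix_fallocate(fd, st.st_size,
                        (off_t) EXTEND_BLOCKS * BLCKSZ) != 0)
        return -1;
    return (long) (st.st_size / BLCKSZ) + EXTEND_BLOCKS;
}
```

On filesystems without real preallocation support, posix_fallocate falls back to writing zeroes, so the contiguity benefit depends on the filesystem's allocator.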