Alex,

> > So the number of individual pages read would equal the number of pages
> > in the database, which means that the number of dirty page writes should
> > be less than or equal to the number of db pages.
> 
> Sean, take into account that for each deleted record version which changes
> (deletes) an index key, we must modify an index block. If records are not
> sorted in the database (I know that in Karol's case they seem to be sorted,
> but now I am talking about the generic case), each deleted record version
> needs another index block. We must find it in the cache, and if it's
> missing (which is quite possible if the indexes of the current table are
> 4 times bigger than the cache), we must read it from disk. But if we have
> only dirty pages in the cache (which is also very likely in such a case),
> some page must be written first. With an index N times bigger than the
> cache, only 1/N of the deleted record versions will not cause a page
> write, given a random distribution of records in the database.

Ok, now I see the problem... but it seems that the current approach is less 
than optimal for the sweep process.
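
As a sanity check on that 1/N figure, here is a throwaway simulation I put 
together (entirely hypothetical: made-up cache size, FIFO eviction, 
uniformly random key placement, and a cache that is always fully dirty; it 
is not modeled on the real cache code):

#include <cstdio>
#include <deque>
#include <random>
#include <unordered_set>

int main()
{
    const int N = 4;                  // index size / cache size ratio
    const int cachePages = 10000;     // made-up cache capacity, in pages
    const int indexPages = N * cachePages;
    const long deletions = 1000000;   // deleted record versions to simulate

    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, indexPages - 1);

    std::unordered_set<int> cached;   // index pages currently in cache
    std::deque<int> fifo;             // eviction order (FIFO is enough here:
                                      // with uniform random access the hit
                                      // rate is cache/index for any policy)
    long writes = 0;

    for (long i = 0; i < deletions; ++i)
    {
        const int page = pick(rng);   // index block holding this key
        if (cached.count(page))
            continue;                 // cache hit: no I/O at all
        if ((int) cached.size() == cachePages)
        {
            // Cache is full and all-dirty: evicting means writing a page.
            cached.erase(fifo.front());
            fifo.pop_front();
            ++writes;
        }
        cached.insert(page);          // read the missing block from disk
        fifo.push_back(page);
    }

    printf("writes per deletion: %.3f (expected ~ (N-1)/N = %.3f)\n",
           (double) writes / deletions, (N - 1) / (double) N);
    return 0;
}

With N = 4 it should print a write rate close to (N - 1)/N = 0.75, a touch 
lower only because the initial fill of the cache costs no writes.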

To be clear, you are saying: suppose a row has an index on Field A, the row 
has 4 record versions (3 of which can be dropped), and the index values for 
those versions were 1, 2, 3 and 4.  When the sweep encounters the row and 
"scrubs" version 1, it reads the index, tries to find the version 1 entry, 
and cleans it at the same time.  And so on for the other versions. Correct?
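
If I have that right, the toy model below is how I picture the per-version 
cleanup (all names and structures are invented for illustration; this is 
not the engine's actual code path):

#include <cstdio>
#include <map>
#include <vector>

int main()
{
    // One row, four record versions; the indexed field held 1, 2, 3, 4
    // in versions 1..4, so the index has one leaf entry per version.
    std::map<int, int> indexOnFieldA = { {1, 1}, {2, 2}, {3, 3}, {4, 4} };
    std::vector<int> garbage = { 1, 2, 3 };   // version 4 is still live

    for (int version : garbage)
    {
        // The sweep handles each garbage version in turn: look up the
        // entry for that version's key value and remove it before
        // touching the next version. In the real index each lookup may
        // land on a different block on disk.
        auto entry = indexOnFieldA.find(version);  // key == version here
        if (entry != indexOnFieldA.end())
            indexOnFieldA.erase(entry);
        printf("scrubbed version %d and its index entry\n", version);
    }

    printf("live index entries left: %zu\n", indexOnFieldA.size());
    return 0;
}

And for unsorted data each of those lookups lands on an essentially random 
index block, which is where the extra reads and dirty-page writes come from.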


Sean

