Daniel John Debrunner wrote:

Suresh Thalamati (JIRA) wrote:

[ http://issues.apache.org/jira/browse/DERBY-239?page=comments#action_12316434 ]
Suresh Thalamati commented on DERBY-239:
----------------------------------------

[comments just on this issue]

b) read each page into the page cache first and then latch the
  page in the cache until a temporary copy of it is made (sketched below). This approach
does not have the overhead of extra latches on the page keys during writes, but it will pollute the page cache with pages that are only required by the backup; this might impact user operations because active user pages may be replaced by the backup pages in the page cache. or
c) read pages into the buffer pool and latch them while making a copy, similar to
the above approach, but somehow make sure that user pages are not kicked out of the buffer pool.
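
A minimal sketch of the latch-and-copy step in option b), assuming hypothetical PageCache, CachedPage and BackupWriter types (these are illustrative placeholders, not actual Derby store interfaces):

import java.io.IOException;

// Hypothetical sketch only: PageCache, CachedPage and BackupWriter are
// placeholder types for illustration, not actual Derby store interfaces.
interface CachedPage {
    void latchShared();          // block writers while the copy is taken
    void unlatch();
    void copyInto(byte[] dest);  // copy the raw page image into dest
}

interface PageCache {
    CachedPage find(long containerId, long pageNumber) throws IOException;
    void release(CachedPage page);
}

interface BackupWriter {
    void writePage(long containerId, long pageNumber, byte[] image) throws IOException;
}

class BackupPageCopier {
    private final PageCache pageCache;
    private final int pageSize;

    BackupPageCopier(PageCache pageCache, int pageSize) {
        this.pageCache = pageCache;
        this.pageSize = pageSize;
    }

    // Copy one page into the backup, holding the page latch only for the
    // in-memory copy; the backup I/O happens after the latch is released.
    void copyPage(long containerId, long pageNumber, BackupWriter backup)
            throws IOException {
        CachedPage page = pageCache.find(containerId, pageNumber); // fault into cache
        byte[] tempCopy = new byte[pageSize];
        page.latchShared();
        try {
            page.copyInto(tempCopy);   // stable snapshot of the page
        } finally {
            page.unlatch();            // writers are blocked only for a memcpy
        }
        backup.writePage(containerId, pageNumber, tempCopy);
        pageCache.release(page);       // page no longer needed by the backup
    }
}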

I think b) is the correct approach, but what is 'buffer pool' in c)?

I think modifications to the cache would be useful for b), so that
entries in the cache (through generic APIs, not specific to store) could
mark how "useful/valuable" they are. Just a simple scheme: lower numbers
less valuable, higher numbers more valuable, and if it makes it easier
to fix a range, e.g. 0-100, then that would be ok. Then the store could
add pages to the cache with this weighting, e.g. (to get the general idea)

    pages for backup - weight 0
    overflow column pages - weight 10
    regular pages - weight 20
    leaf index pages - weight 30
    root index pages - weight 80

This weight would then be factored into the decision to throw pages out
or not.
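
A rough sketch of the generic weighting idea, assuming a hypothetical WeightedCache class (this is not Derby's cache manager API, just an illustration of picking a victim by weight first and recency second):

import java.util.HashMap;
import java.util.Map;

// Rough sketch, not Derby's cache API: every entry carries a weight in
// 0-100, and the eviction victim is the lowest-weight entry, with
// least-recently-used breaking ties.
class WeightedCache<K, V> {
    private static final class CacheEntry<V> {
        final V value;
        final int weight;   // 0 = cheap to throw out, 100 = keep if at all possible
        long lastUsed;
        CacheEntry(V value, int weight, long lastUsed) {
            this.value = value;
            this.weight = weight;
            this.lastUsed = lastUsed;
        }
    }

    private final int capacity;
    private final Map<K, CacheEntry<V>> entries = new HashMap<>();
    private long clock = 0;

    WeightedCache(int capacity) {
        this.capacity = capacity;
    }

    synchronized void put(K key, V value, int weight) {
        if (weight < 0 || weight > 100) {
            throw new IllegalArgumentException("weight must be in 0..100");
        }
        if (!entries.containsKey(key) && entries.size() >= capacity) {
            evictOne();
        }
        entries.put(key, new CacheEntry<V>(value, weight, ++clock));
    }

    synchronized V get(K key) {
        CacheEntry<V> e = entries.get(key);
        if (e == null) {
            return null;
        }
        e.lastUsed = ++clock;
        return e.value;
    }

    // Victim selection: lowest weight first, oldest use among equal weights.
    private void evictOne() {
        K victim = null;
        CacheEntry<V> worst = null;
        for (Map.Entry<K, CacheEntry<V>> me : entries.entrySet()) {
            CacheEntry<V> e = me.getValue();
            if (worst == null
                    || e.weight < worst.weight
                    || (e.weight == worst.weight && e.lastUsed < worst.lastUsed)) {
                victim = me.getKey();
                worst = e;
            }
        }
        entries.remove(victim);
    }
}

With the weights listed above, a page brought in only for the backup (weight 0) would be the first to go under cache pressure, while a root index page (weight 80) would almost never be displaced by backup reads.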

This project could be independent of the online backup and could have
benefits elsewhere.

Dan.

"buffer pool" is hangover from my old job, I meant "page cache" in option c). I also think b) is the correct approach,
if  the cache  can be   enhanced as you  described.


Thanks
-suresht

