Comments below.

Raymond Raymond wrote:
At the end of last year, I discussed with Oystein and Mike how to
automatically decide the checkpoint interval. Now I am trying to
implement it. Following what we discussed, I have written an outline of
what I am going to do.

 1. Let users set an acceptable recovery time that Derby should
     try to satisfy. (We will provide an appropriate default value.)
This is a new knob for a zero-admin db, and I would prefer fewer
knobs.  Having said that, this knob makes more sense to users than the
current "amount of log file" knob.

 2. During initialization of Derby, we run some measurement that
    determines the performance of the system and maps the
    recovery time into some X megabytes of log.
What do you mean by initialization? Once per boot, once per db
creation, something else? I worry about startup time for something that
is done too often.  I actually think something similar could make our
optimizer estimates more accurate by tuning the estimates to the machine
they are running on, but I did not want to slow the system unnecessarily.
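For concreteness, here is a rough sketch (not Derby code; class and
method names are made up) of what step 2's mapping might look like:
measure sequential read throughput once with a scratch file, then turn
the user's recovery-time target into a log-size cap. It assumes redo
cost is dominated by a sequential scan of the log; the overhead factor
for actually applying the records is an illustrative guess.

```java
import java.io.*;

// Hypothetical sketch: map a target recovery time to "X megabytes of log".
public class RecoveryBudget {

    // Measure sequential read throughput (bytes/sec) over a scratch file,
    // then delete it. This is the kind of one-time probe step 2 proposes.
    static double measureSeqReadBytesPerSec(File scratch, int megabytes)
            throws IOException {
        byte[] buf = new byte[1 << 20];
        try (OutputStream out = new FileOutputStream(scratch)) {
            for (int i = 0; i < megabytes; i++) out.write(buf);
        }
        long start = System.nanoTime();
        try (InputStream in = new FileInputStream(scratch)) {
            while (in.read(buf) != -1) { /* sequential scan, like redo */ }
        }
        double secs = (System.nanoTime() - start) / 1e9;
        scratch.delete();
        return (megabytes * (double) (1 << 20)) / secs;
    }

    // Convert the user's recovery-time target into a log length limit.
    // redoOverheadFactor > 1 accounts for applying log records, not just
    // reading them; its value would have to come from measurement.
    static long logLimitBytes(double targetRecoverySecs,
                              double seqReadBytesPerSec,
                              double redoOverheadFactor) {
        return (long) (targetRecoverySecs * seqReadBytesPerSec / redoOverheadFactor);
    }
}
```

With, say, a 10-second target, 50 MB/s measured throughput, and an
overhead factor of 2, the cap would come out at roughly 250 MB of log.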

 3. Establish a dirty page list in which dirty pages are sorted in
     ascending order of the time when they were first updated. When a
     dirty page is flushed out to disk, it is released from the list.
     (This step needs further discussion: whether we need such a list
     at all.)
I am not convinced of the need for such a list, and do not want to see
such a list slow down non-checkpoint related activity. From other
reported Derby issues it is clear we actually want to "slow" the
checkpoint down rather than optimize it.  The downside of the current
algorithm is that a page that is made dirty after the checkpoint starts
will be written, and if it gets written again before the next checkpoint
we have done one I/O too many.  I think I/O optimization may benefit
more from working on the background I/O thread than from working
on the checkpoint.
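Just to make the step 3 proposal concrete (this is a hypothetical
sketch, not Derby internals): a LinkedHashMap keyed by page id gives
first-dirtied insertion order plus O(1) removal when a page is cleaned,
so the maintenance cost on non-checkpoint activity stays small.

```java
import java.util.*;

// Hypothetical dirty page list kept in first-dirtied order.
public class DirtyPageList {
    // pageId -> time the page first became dirty; iteration order is
    // insertion order, i.e. oldest-dirtied first.
    private final LinkedHashMap<Long, Long> firstDirtiedAt = new LinkedHashMap<>();

    // Record only the FIRST time a page becomes dirty; later updates to
    // the same page must not reorder it.
    void markDirty(long pageId, long now) {
        firstDirtiedAt.putIfAbsent(pageId, now);
    }

    // A page flushed by any path (checkpoint or background writer)
    // leaves the list.
    void markClean(long pageId) {
        firstDirtiedAt.remove(pageId);
    }

    // The earliest-dirtied page is where an incremental checkpoint
    // would start; null means nothing is dirty.
    Long oldestDirtyPage() {
        Iterator<Long> it = firstDirtiedAt.keySet().iterator();
        return it.hasNext() ? it.next() : null;
    }
}
```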

 4. A checkpoint is made and controlled based on a combination of
     - the acceptable log length we get in step 2
     - the current I/O performance
 5. We do an incremental checkpoint. That means: starting from the
     beginning of the dirty page list established in step 3 (the
     earliest updated dirty page) and moving to the end of the list
     (the latest updated dirty page), we do the checkpoint. If data
     reads or log writes (if the log is in the default location) start
     to show longer response times than an acceptable value, we pause
     the checkpoint process and update the log control file to let
     Derby know where we are. When data read or log write times return
     to an acceptable value, we continue the checkpoint.
This sounds like you are looking to address DERBY-799.  I thought Oystein
was going to work on this, but there is no owner so I am not sure.  You
may at least want to consult with him on his findings.
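The pause/resume idea in step 5 reduces to a small control loop.
A sketch, with the flush and latency hooks left out as hypothetical
(the thresholds and the latency source are illustrative only):

```java
// Hypothetical sketch of step 5's pause logic: flush dirty pages
// oldest-first, stopping as soon as observed I/O response times say
// the system is busy. Progress would be recorded in the log control
// file so the checkpoint can resume where it left off.
public class IncrementalCheckpoint {

    // latenciesMs[i] is the observed data-read / log-write response
    // time (ms) right after flushing page i. Returns how many pages
    // were flushed before pausing.
    static int flushUntilSlow(int dirtyPageCount,
                              double[] latenciesMs,
                              double maxAcceptableMs) {
        int flushed = 0;
        for (int i = 0; i < dirtyPageCount; i++) {
            // flush page i here (omitted), then persist progress so
            // Derby knows where we are if we pause or crash
            flushed++;
            if (latenciesMs[i] > maxAcceptableMs) {
                break; // pause: foreground I/O is suffering
            }
        }
        return flushed;
    }
}
```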

This is just an outline. I would like to discuss the details with
everyone later. If anyone has any suggestions, please let me know.

Now, I am going to design the 2nd step first: mapping the recovery time
into some X megabytes of log. A simple approach is to design a test log
file. Derby could create a temporary database, run a set of tests to
gather the necessary disk I/O information, and then delete the temporary
database. When Derby boots up, we let it do recovery from the test log
file. Does anyone have other suggestions?
I'll think about this; it is not straightforward. My guess is that
recovery time is dominated by 2 factors:
1) I/O from log disk
2) I/O from data disk

Item 1 is pretty easy to get a handle on.  During redo it is pretty much
a straight scan from beginning to end doing page-based I/O. Undo is
harder, as it jumps back and forth for each xact; I would probably just
ignore it for the estimates.

Item 2 is totally dependent on the cache hit rate you expect, and on the
number of log records. The majority of log records deal with a single
page: redo reads the page into the cache if it isn't there and then does
a quick operation on that page. Again, undo is slightly more complicated,
as it can involve logical lookups in the index.
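As a back-of-envelope version of item 2 (illustrative only, and it
ignores undo): the data-disk cost of redo is roughly one random read per
log record that misses the cache.

```java
// Hypothetical estimate: redo's data-disk time from log record count,
// expected cache hit rate, and average random read latency.
public class RedoDataCost {
    static double estimateSeconds(long logRecords,
                                  double cacheHitRate,
                                  double avgReadMs) {
        // one cache miss per missed record, one random read per miss
        double misses = logRecords * (1.0 - cacheHitRate);
        return misses * avgReadMs / 1000.0;
    }
}
```

For example, 100,000 log records at a 90% hit rate and 5 ms per read
gives on the order of 50 seconds of data-disk time, which shows how
sensitive the estimate is to the assumed hit rate.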

Another option, rather than doing any sort of testing, is to come up
with an initial default estimate based on the size of the log file, and
then on each subsequent recovery event dynamically adjust the estimate
based on how long that recovery of that db actually took. That way each
estimate is based on the actual work generated by the application, and
over time it should become better and better.
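That self-tuning option could be as simple as keeping one number per
database, recovery seconds per megabyte of log, and blending each
observed recovery into it. A sketch (the 0.5 smoothing factor is an
arbitrary illustrative choice):

```java
// Hypothetical per-database estimate that refines itself after every
// real recovery, as suggested above.
public class RecoveryEstimate {
    private double secsPerMb;

    RecoveryEstimate(double initialSecsPerMb) {
        this.secsPerMb = initialSecsPerMb; // seed from a default
    }

    // After each recovery, fold the observed rate into the estimate
    // (exponential moving average with factor 0.5).
    void observeRecovery(double logMb, double elapsedSecs) {
        double observed = elapsedSecs / logMb;
        secsPerMb = 0.5 * secsPerMb + 0.5 * observed;
    }

    // How much log we can allow for a given recovery-time target.
    double logLimitMb(double targetSecs) {
        return targetSecs / secsPerMb;
    }
}
```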


I am wondering whether I need to establish some relationship between
data read times and data write times. I mean: under a certain average
data read time, approximately how long would the average data write time
be? Since what we get from step 2 is measured under one particular
system condition, when the condition changes (the system becomes busier)
the value should change too. If I can establish such a relationship,
then I can make accurate adjustments to the checkpoint process.




Raymond
