You are seeing the same results as Oystein reported as part of
DERBY-799. If no one submits a patch for DERBY-799, I will probably
submit something simple before the next candidate release is
cut: something on the order of sleeping in between I/Os to
keep the checkpoint from flooding the disk with I/O.
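The throttling idea can be sketched roughly like this. This is a hypothetical illustration, not the actual Derby patch; the class name, method, and pause length are all invented placeholders:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.List;

// Hypothetical sketch: slow the checkpoint down by pausing between
// page writes so user transactions are not starved of disk bandwidth.
public class ThrottledCheckpoint {

    // Illustrative pause; a real patch would tune or adapt this value.
    private static final long PAUSE_MILLIS = 10;

    static void writePages(RandomAccessFile file, List<byte[]> dirtyPages)
            throws IOException, InterruptedException {
        for (byte[] page : dirtyPages) {
            file.write(page);           // flush one dirty page
            Thread.sleep(PAUSE_MILLIS); // yield the disk to other I/O
        }
        file.getFD().sync();            // make the checkpoint durable
    }
}
```

The trade-off is that the sleep makes each checkpoint take longer, in exchange for smoothing out the I/O spikes that concurrent transactions would otherwise see.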
Kristian
Mike Matrigali wrote:
Long answer above, some comments inline below.
I think runtime performance would be optimal in this case; runtime
performance is in no way "helped" by having checkpoints, which either
leave it unaffected or hinder it. As has been noted, checkpoints can
cause drastic downward spikes in some disk-bound workloads.
I think this is the right path, though it would need more details:
o does boot mean first-time boot for each db?
o how to determine this machine
o and the total time to run such a test.
There are some very quick and useful tests that would be fine to
add to the default system and run one time per machine.
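As an illustration of such a quick one-time test (class and method names are invented, not Derby APIs), the system could time a short burst of sequential writes to a scratch file and cache the result so later boots skip the probe:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical boot-time probe: measure sequential write throughput
// once per machine by timing a small synced write to a scratch file.
public class BootIoProbe {

    /** Writes sampleBytes to a scratch file in dir and returns MB/s. */
    static double measureWriteMbPerSec(File dir, int sampleBytes)
            throws IOException {
        File scratch = File.createTempFile("probe", ".tmp", dir);
        byte[] chunk = new byte[8192];
        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream(scratch)) {
            for (int written = 0; written < sampleBytes; written += chunk.length) {
                out.write(chunk);
            }
            out.getFD().sync(); // include the cost of forcing to disk
        } finally {
            scratch.delete();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return (sampleBytes / 1e6) / seconds;
    }
}
```

Keeping the sample small bounds the total time the test adds to first boot, which addresses the last bullet above.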
Mike,
Last time we discussed how to map the recovery time into X MB of log.
I have been thinking about it recently and have a proposal.
How about this: the very first time Derby boots on a certain machine
(not every time), we let the user choose whether they want to run some
performance measurement.
Mike, last time you gave me some comments about how to map the
recovery time into X MB of log. I still have some questions about it.
RR2. During initialization of Derby, we run some measurement that
RR determines the performance of the system and maps the
RR recovery time into some X megabytes of log.
Raymond Raymond wrote:
Mike, thank you for your comments. They really help me a lot. I would
like to discuss the issue further.
RR2. During initialization of Derby, we run some measurement that
RR determines the performance of the system and maps the
RR recovery time into some X megabytes of log.
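Under the simplifying assumption that recovery time scales roughly linearly with the amount of log to replay, the RR2 mapping reduces to one multiplication. The class, method, and rate parameter below are hypothetical; the recovery rate would come from the boot-time measurement:

```java
// Hypothetical sketch of the RR2 mapping step: convert an acceptable
// recovery time into a log-size budget, assuming recovery replays the
// log at a roughly constant rate measured at initialization.
public class RecoveryBudget {

    /** Maps acceptable recovery time (seconds) to a log limit in MB. */
    static double maxLogMb(double acceptableRecoverySec,
                           double recoveryMbPerSec) {
        return acceptableRecoverySec * recoveryMbPerSec;
    }
}
```

For example, if the user accepts 30 seconds of recovery and the machine replays log at 5 MB/s, the system would checkpoint after roughly every 150 MB of log.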
At the end of last year, I discussed how to automatically decide the
checkpoint interval with Oystein and Mike. Now I am trying to
implement it. Following what we discussed, here is an outline of what
I am going to do:
1. Let the user set a certain acceptable recovery time
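Step 1 might be exposed as a system property. To be clear, the property name "derby.system.acceptableRecoveryTime" below is invented for illustration and is not an existing Derby setting:

```java
// Hypothetical sketch of step 1: read a user-settable acceptable
// recovery time, falling back to a default when it is not configured.
public class RecoveryTimeSetting {

    /** Returns the acceptable recovery time in seconds (default 60). */
    static double acceptableRecoverySec() {
        String v = System.getProperty(
                "derby.system.acceptableRecoveryTime", "60");
        return Double.parseDouble(v);
    }
}
```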