Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Thanks for making my patch clearly understandable!
> We might want to call GetCheckpointProgress something
> else, though. It doesn't return the amount of progress made, but rather
> the amount of progress we should've made up to that point or we're in
> danger of not completing the checkpoint in time.
GetCheckpointProgress might be a bad name; it returns the progress we should
have made by that time, not the progress actually made. How about
GetCheckpointTargetProgress?
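As a rough sketch of what the "target progress" means (illustrative Python, not the patch code; the function and parameter names here are made up for explanation):

```python
def checkpoint_target_progress(elapsed_s, checkpoint_timeout_s,
                               checkpoint_write_percent):
    """Fraction of checkpoint work (0.0-1.0) that should already be
    done, given how much of the write-phase time budget has elapsed."""
    # The write phase gets checkpoint_write_percent of the interval.
    budget_s = checkpoint_timeout_s * (checkpoint_write_percent / 100.0)
    return min(elapsed_s / budget_s, 1.0)
```

So if checkpoint_timeout is 300s and checkpoint_write_percent is 50, then at 75s in we should have done half of the writes; past 150s the target saturates at 1.0 and we are behind schedule.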
> However, if we're
> already past checkpoint_write_percent at the beginning of the nap, I
> think we should clamp the nap time so that we don't run out of time
> until the next checkpoint because of sleeping.
Yeah, I'm thinking the nap time should be clamped to
(100.0 - ckpt_progress_at_nap_start - checkpoint_sync_percent). I think
exceeding checkpoint_write_percent is not so important here, so I only care
about the end of the checkpoint.
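That clamp could look like this (illustrative Python only; the names are invented for the example, and the budget is expressed as a percentage of the checkpoint interval as above):

```python
def clamped_nap_time(desired_nap_s, checkpoint_timeout_s,
                     progress_at_nap_start, checkpoint_sync_percent):
    """Never nap past the point where the sync phase must begin."""
    # Percent of the interval still available before the sync budget.
    budget_pct = max(100.0 - progress_at_nap_start
                     - checkpoint_sync_percent, 0.0)
    max_nap_s = checkpoint_timeout_s * budget_pct / 100.0
    return min(desired_nap_s, max_nap_s)
```

With this, a checkpoint that is already at 90% progress with a 20% sync budget gets no nap at all, rather than sleeping into the next checkpoint.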
> In the sync phase, we sleep between each fsync until enough
> time/segments have passed, assuming that the time to fsync is
> proportional to the file length. I'm not sure that's a very good
> assumption. We might have one huge files with only very little changed
> data, for example a logging table that is just occasionaly appended to.
> If we begin by fsyncing that, it'll take a very short time to finish,
> and we'll then sleep for a long time. If we then have another large file
> to fsync, but that one has all pages dirty, we risk running out of time
> because of the unnecessarily long sleep. The segmentation of relations
> limits the risk of that, though, by limiting the max. file size, and I
> don't really have any better suggestions.
It is difficult to estimate fsync costs. We would need additional statistics
to do it. For example, if we recorded the number of write() calls for each
segment, we could use that value as the number of dirty pages in the segment.
We don't have per-file write statistics now, but if we had that information,
we could use it to control checkpoints more cleverly.
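The idea might be modeled like this (illustrative Python; no such per-segment counters exist today, and treating each counted write() as one dirty page over-counts pages written more than once, so it is only an upper bound):

```python
BLCKSZ = 8192  # PostgreSQL's default block size

def estimated_fsync_bytes(write_counts):
    """Estimate bytes each fsync must flush, from hypothetical
    per-segment write() counters (one count ~ one dirty page)."""
    return {seg: n * BLCKSZ for seg, n in write_counts.items()}
```

A scheduler could then apportion the sync-phase budget by estimated dirty bytes instead of by raw file length, which would handle the "huge but mostly clean" segment case above.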
> Should we try doing something similar for the sync phase? If there's
> only 2 small files to fsync, there's no point sleeping for 5 minutes
> between them just to use up the checkpoint_sync_percent budget.
Hmmm... if we add a new parameter like kernel_write_throughput [kB/s] and
clamp the maximum sleep to size-of-segment / kernel_write_throughput (*1),
we can avoid unnecessary sleeping in the fsync phase. Do we want to have such
a new parameter? We already have many GUC variables even now;
I don't want to add new parameters any more if possible...
(*1) dirty-area-in-segment / kernel_write_throughput would be better.
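In other words, the sleep cap would be roughly (illustrative Python; kernel_write_throughput is the hypothetical parameter proposed above, not an existing GUC):

```python
def max_fsync_sleep_s(dirty_bytes, kernel_write_throughput_kbps):
    """Cap the inter-fsync sleep at the time the kernel plausibly
    needs to write the segment's dirty data back; sleeping longer
    than that buys nothing."""
    return dirty_bytes / (kernel_write_throughput_kbps * 1024.0)
```

So with only two small files to fsync, the cap stays near zero and the checkpoint finishes early instead of idling through the whole checkpoint_sync_percent budget.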
NTT Open Source Software Center