Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-19 Thread Heikki Linnakangas

Jim Nasby wrote:

On Jun 17, 2007, at 4:39 AM, Simon Riggs wrote:

pg_start_backup() should be a normal checkpoint I think. No need for
backup to be an intrusive process.


Good point. A spread-out checkpoint can take a long time to finish,
though. Is there a risk of running into a timeout or something if it
takes, say, 10 minutes for a call to pg_start_backup to finish?


That would be annoying, but the alternative is for backups to seriously
affect performance, which would defeat the object of the HOT backup.
It's not like it's immediate right now, so we'd probably be moving from
2-3 mins to 10 mins in your example. Most people are expecting their
backups to take a long time anyway, so that's OK.


We should document it, though; otherwise I can see a bunch of confused 
users wondering why pg_start_backup takes so long. Remember that with 
longer checkpoints, the odds of them calling pg_start_backup during one 
and having to wait are much greater.


If pg_start_backup initiates a non-immediate, smoothed checkpoint, what 
about a checkpoint that's already in progress when pg_start_backup is 
called? Should that one be hurried, so we can start the backup sooner? 
Probably not, which means we'll need yet another mode for 
RequestCheckpoint: request a non-immediate checkpoint, but if a 
checkpoint is already running, don't rush it.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-19 Thread Jim Nasby

On Jun 17, 2007, at 4:39 AM, Simon Riggs wrote:

pg_start_backup() should be a normal checkpoint I think. No need for
backup to be an intrusive process.


Good point. A spread-out checkpoint can take a long time to finish,
though. Is there a risk of running into a timeout or something if it
takes, say, 10 minutes for a call to pg_start_backup to finish?


That would be annoying, but the alternative is for backups to seriously
affect performance, which would defeat the object of the HOT backup.
It's not like it's immediate right now, so we'd probably be moving from
2-3 mins to 10 mins in your example. Most people are expecting their
backups to take a long time anyway, so that's OK.


We should document it, though; otherwise I can see a bunch of  
confused users wondering why pg_start_backup takes so long. Remember  
that with longer checkpoints, the odds of them calling  
pg_start_backup during one and having to wait are much greater.

--
Jim Nasby   [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-17 Thread Heikki Linnakangas

Simon Riggs wrote:

XLogCtl->LogwrtRqst.Write is updated every time we insert an xlog record
that advances to a new page. It isn't exactly up to date, but it lags
behind by no more than a page.


Oh, ok. That would work just fine then.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-17 Thread Simon Riggs
On Sun, 2007-06-17 at 12:00 +0100, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > On Sun, 2007-06-17 at 08:51 +0100, Heikki Linnakangas wrote:
> >>> Do we need to know it so exactly that we look
> >>> at WALInsertLock? Maybe use info_lck to request the latest page, since
> >>> that is less heavily contended and we need never wait across I/O.
> >> Is there such a value available, that's protected by just info_lck? I 
> >> can't see one.
> > 
> > XLogCtl->LogwrtRqst.Write
> 
> That's the Write location. checkpoint_segments is calculated against the 
> Insert location. In a normal OLTP scenario they would be close to each 
> other, but if you're doing a huge data load in a transaction; restoring 
> from backup for example, they could be really far apart.

XLogCtl->LogwrtRqst.Write is updated every time we insert an xlog record
that advances to a new page. It isn't exactly up to date, but it lags
behind by no more than a page.

LogwrtRqst and LogwrtResult may differ substantially in the situation
you mention.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-17 Thread Heikki Linnakangas

Simon Riggs wrote:

On Sun, 2007-06-17 at 08:51 +0100, Heikki Linnakangas wrote:

Do we need to know it so exactly that we look
at WALInsertLock? Maybe use info_lck to request the latest page, since
that is less heavily contended and we need never wait across I/O.
Is there such a value available, that's protected by just info_lck? I 
can't see one.


XLogCtl->LogwrtRqst.Write


That's the Write location. checkpoint_segments is calculated against the 
Insert location. In a normal OLTP scenario they would be close to each 
other, but if you're doing a huge data load in a transaction; restoring 
from backup for example, they could be really far apart.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-17 Thread Simon Riggs
On Sun, 2007-06-17 at 08:51 +0100, Heikki Linnakangas wrote:

> > We don't really care about units because
> > the way you use it is to nudge it up a little and see if that works
> > etc..
> 
> Not necessarily. If it's given in KB/s, you might very well have an idea 
> of how much I/O your hardware is capable of, and set aside a fraction of 
> that for checkpoints.

I'm worried that people will think they can calculate the setting
without testing.

I guess with the right caveats in the docs about the need for testing to
ensure the values are suitable for your situation, I can accept KB/s. 

> > Can we avoid having another parameter? There must be some protection in
> > there to check that a checkpoint lasts for no longer than
> > checkpoint_timeout, so it makes most sense to vary the checkpoint in
> > relation to that parameter.
> 
> Sure, that's what checkpoint_write_percent is for. 

Yeh, I didn't understand the name.

> checkpoint_rate can 
> be used to finish the checkpoint faster, if there's not much work to do. 
> For example, if there's only 10 pages to flush in a checkpoint, 
> checkpoint_timeout is 30 minutes and checkpoint_write_percent = 50%, you 
> don't want to spread out those 10 writes over 15 minutes, that would be 
> just silly. checkpoint_rate sets the *minimum* rate used to write. If 
> writing at that minimum rate isn't enough to finish the checkpoint in 
> >> time, as defined by checkpoint interval * checkpoint_write_percent, 
> we write more aggressively.
> 
> I'm more interested in checkpoint_write_percent myself as well, but Greg 
> Smith said he wanted the checkpoint to use a constant I/O rate and let 
> >> the length of the checkpoint vary.

Having both parameters is good.

I'm really impressed with the results on the response time graphs.

> >> - The signaling between RequestCheckpoint and bgwriter is a bit tricky. 
> >> Bgwriter now needs to deal with immediate checkpoint requests, like those 
> >> coming from explicit CHECKPOINT or CREATE DATABASE commands, differently 
> >> from those triggered by checkpoint_segments. I'm afraid there might be 
> >> race conditions when a CHECKPOINT is issued at the same instant as 
> >> checkpoint_segments triggers one. What might happen then is that the 
> >> checkpoint is performed lazily, spreading the writes, and the CHECKPOINT 
> >> command has to wait for that to finish which might take a long time. I 
> >> have not been able to convince myself either that the race condition 
> >> exists or that it doesn't.
> > 
> > Is there a mechanism for requesting immediate/non-immediate checkpoints?
> 
> No, CHECKPOINT requests an immediate one. Is there a use case for 
> CHECKPOINT LAZY?

I meant via the CreateCheckpoint API etc.

> > pg_start_backup() should be a normal checkpoint I think. No need for
> > backup to be an intrusive process.
> 
> Good point. A spread-out checkpoint can take a long time to finish, 
> though. Is there a risk of running into a timeout or something if it 
> takes, say, 10 minutes for a call to pg_start_backup to finish?

That would be annoying, but the alternative is for backups to seriously
affect performance, which would defeat the object of the HOT backup.
It's not like it's immediate right now, so we'd probably be moving from
2-3 mins to 10 mins in your example. Most people are expecting their
backups to take a long time anyway, so that's OK. 

> > Do we need to know it so exactly that we look
> > at WALInsertLock? Maybe use info_lck to request the latest page, since
> > that is less heavily contended and we need never wait across I/O.
> 
> Is there such a value available, that's protected by just info_lck? I 
> can't see one.

XLogCtl->LogwrtRqst.Write

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-17 Thread Heikki Linnakangas

Simon Riggs wrote:

On Fri, 2007-06-15 at 11:34 +0100, Heikki Linnakangas wrote:

- What units should we use for the new GUC variables? From an 
implementation point of view, it would be simplest if 
checkpoint_write_rate is given as pages/bgwriter_delay, similarly to 
bgwriter_*_maxpages. I never liked those *_maxpages settings, though; a 
more natural unit from the user's perspective would be KB/s.


checkpoint_maxpages would seem like a better name; we've already had
those _maxpages settings for 3 releases, so changing that is not really
an option (at so late a stage).


As Tom pointed out, we don't promise compatibility of conf-files over 
major releases. I wasn't actually thinking of changing any of the 
existing parameters, just thinking about the best name and behavior for 
the new ones.



We don't really care about units because
the way you use it is to nudge it up a little and see if that works
etc..


Not necessarily. If it's given in KB/s, you might very well have an idea 
of how much I/O your hardware is capable of, and set aside a fraction of 
that for checkpoints.



Can we avoid having another parameter? There must be some protection in
there to check that a checkpoint lasts for no longer than
checkpoint_timeout, so it makes most sense to vary the checkpoint in
relation to that parameter.


Sure, that's what checkpoint_write_percent is for. checkpoint_rate can 
be used to finish the checkpoint faster, if there's not much work to do. 
For example, if there's only 10 pages to flush in a checkpoint, 
checkpoint_timeout is 30 minutes and checkpoint_write_percent = 50%, you 
don't want to spread out those 10 writes over 15 minutes, that would be 
just silly. checkpoint_rate sets the *minimum* rate used to write. If 
writing at that minimum rate isn't enough to finish the checkpoint in 
time, as defined by checkpoint interval * checkpoint_write_percent, 
we write more aggressively.


I'm more interested in checkpoint_write_percent myself as well, but Greg 
Smith said he wanted the checkpoint to use a constant I/O rate and let 
the length of the checkpoint vary.


- The signaling between RequestCheckpoint and bgwriter is a bit tricky. 
Bgwriter now needs to deal with immediate checkpoint requests, like those 
coming from explicit CHECKPOINT or CREATE DATABASE commands, differently 
from those triggered by checkpoint_segments. I'm afraid there might be 
race conditions when a CHECKPOINT is issued at the same instant as 
checkpoint_segments triggers one. What might happen then is that the 
checkpoint is performed lazily, spreading the writes, and the CHECKPOINT 
command has to wait for that to finish which might take a long time. I 
have not been able to convince myself either that the race condition 
exists or that it doesn't.


Is there a mechanism for requesting immediate/non-immediate checkpoints?


No, CHECKPOINT requests an immediate one. Is there a use case for 
CHECKPOINT LAZY?



pg_start_backup() should be a normal checkpoint I think. No need for
backup to be an intrusive process.


Good point. A spread-out checkpoint can take a long time to finish, 
though. Is there a risk of running into a timeout or something if it 
takes, say, 10 minutes for a call to pg_start_backup to finish?


- to coordinate the writes with checkpoint_segments, we need to 
read the WAL insertion location. To do that, we need to acquire the 
WALInsertLock. That means that in the worst case, WALInsertLock is 
acquired every bgwriter_delay when a checkpoint is in progress. I don't 
think that's a problem, it's only held for a very short duration, but I 
thought I'd mention it.


I think that is a problem. 


Why?


Do we need to know it so exactly that we look
at WALInsertLock? Maybe use info_lck to request the latest page, since
that is less heavily contended and we need never wait across I/O.


Is there such a value available, that's protected by just info_lck? I 
can't see one.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-16 Thread Simon Riggs
On Sat, 2007-06-16 at 11:02 -0400, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > checkpoint_maxpages would seem like a better name; we've already had
> > those _maxpages settings for 3 releases, so changing that is not really
> > an option (at so late a stage).
> 
> Sure it is.  

Maybe, but we need a proposal before we agree to change anything.

If we can get rid of them, good, but if we can't I'd like to see a clear
posting of the parameters, their meaning and suggested names before we
commit this, please.

BTW, checkpoint_write_percent actually does what I wanted, but I wasn't
able to guess that from the name. checkpoint_duration_percent? 

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-16 Thread Tom Lane
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> checkpoint_maxpages would seem like a better name; we've already had
> those _maxpages settings for 3 releases, so changing that is not really
> an option (at so late a stage).

Sure it is.  We've never promised stability of obscure tuning settings.
For something as commonly set from-an-application as, say, work_mem
(nee sort_mem), it's worth worrying about backward compatibility.
But not for things that in practice would only be set from
postgresql.conf.  It's never been possible to just copy your old .conf
file without any thought when moving to a new release.

regards, tom lane



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-16 Thread Simon Riggs
On Fri, 2007-06-15 at 13:11 +0200, Michael Paesold wrote:
> Heikki Linnakangas wrote:
> > Here's an updated WIP version of the LDC patch. It just spreads the 
> > writes, which achieves the goal of smoothing the checkpoint I/O spikes. I 
> > think sorting the writes etc. is interesting but falls in the category 
> > of further development and should be pushed to 8.4.
> 
> Why do you think so? Is it too much risk to adopt the sorted writes? The 
> numbers shown by ITAGAKI Takahiro looked quite impressive, at least for 
> large shared_buffers configurations. The reactions were rather 
> positive, too.

Agreed.

Seems like a simple isolated piece of code to include or not. If we
controlled it with a simple boolean parameter, that would allow testing
during beta - I agree it needs more testing.

If we find a way of automating it, cool. If it's flaky, strip it out
before we release.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-16 Thread Simon Riggs
On Fri, 2007-06-15 at 11:34 +0100, Heikki Linnakangas wrote:

> - What units should we use for the new GUC variables? From an 
> implementation point of view, it would be simplest if 
> checkpoint_write_rate is given as pages/bgwriter_delay, similarly to 
> bgwriter_*_maxpages. I never liked those *_maxpages settings, though; a 
> more natural unit from the user's perspective would be KB/s.

checkpoint_maxpages would seem like a better name; we've already had
those _maxpages settings for 3 releases, so changing that is not really
an option (at so late a stage). We don't really care about units because
the way you use it is to nudge it up a little and see if that works
etc..

Can we avoid having another parameter? There must be some protection in
there to check that a checkpoint lasts for no longer than
checkpoint_timeout, so it makes most sense to vary the checkpoint in
relation to that parameter.

> - The signaling between RequestCheckpoint and bgwriter is a bit tricky. 
> Bgwriter now needs to deal with immediate checkpoint requests, like those 
> coming from explicit CHECKPOINT or CREATE DATABASE commands, differently 
> from those triggered by checkpoint_segments. I'm afraid there might be 
> race conditions when a CHECKPOINT is issued at the same instant as 
> checkpoint_segments triggers one. What might happen then is that the 
> checkpoint is performed lazily, spreading the writes, and the CHECKPOINT 
> command has to wait for that to finish which might take a long time. I 
> have not been able to convince myself either that the race condition 
> exists or that it doesn't.

Is there a mechanism for requesting immediate/non-immediate checkpoints?

pg_start_backup() should be a normal checkpoint I think. No need for
backup to be an intrusive process.

> - to coordinate the writes with checkpoint_segments, we need to 
> read the WAL insertion location. To do that, we need to acquire the 
> WALInsertLock. That means that in the worst case, WALInsertLock is 
> acquired every bgwriter_delay when a checkpoint is in progress. I don't 
> think that's a problem, it's only held for a very short duration, but I 
> thought I'd mention it.

I think that is a problem. Do we need to know it so exactly that we look
at WALInsertLock? Maybe use info_lck to request the latest page, since
that is less heavily contended and we need never wait across I/O.

> - How should we deal with changing GUC variables that affect LDC, on the 
> fly when a checkpoint is in progress? The attached patch finishes the 
> in-progress checkpoint ASAP, and reloads the config after that. We could 
> reload the config immediately, but making the new settings effective 
> immediately is not trivial.

No need to do this during a checkpoint, there'll be another along
shortly anyhow.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-15 Thread Alvaro Herrera
Heikki Linnakangas wrote:
> Alvaro Herrera wrote:

> > if (BgWriterShmem->ckpt_time_warn && elapsed_secs < CheckPointWarning)
> >     ereport(LOG,
> >             (errmsg("checkpoints are occurring too frequently (%d seconds apart)",
> >                     elapsed_secs),
> >              errhint("Consider increasing the configuration parameter \"checkpoint_segments\".")));
> > BgWriterShmem->ckpt_time_warn = false;
> 
> In the extremely unlikely event that RequestCheckpoint sets 
> ckpt_time_warn right before it's cleared, after the test in the 
> if-statement, the warning is missed.

I think this code should look like this:

if (BgWriterShmem->ckpt_time_warn)
{
    BgWriterShmem->ckpt_time_warn = false;
    if (elapsed_secs < CheckPointWarning)
        ereport(LOG,
                (errmsg("checkpoints are occurring too frequently (%d seconds apart)",
                        elapsed_secs),
                 errhint("Consider increasing the configuration parameter \"checkpoint_segments\".")));
}

That way seems safer.  (I am assuming that a process other than the
bgwriter is able to set the ckpt_time_warn bit; otherwise there is no
point).  This is also used in pmsignal.c.  Of course, as you say, this
is probably very harmless, but in the other case it is important to get
it right.

-- 
Alvaro Herrera   http://www.PlanetPostgreSQL.org/
"Hackers share the surgeon's secret pleasure in poking about in gross innards,
the teenager's secret pleasure in popping zits." (Paul Graham)



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-15 Thread Heikki Linnakangas

Alvaro Herrera wrote:

Heikki Linnakangas wrote:

- The signaling between RequestCheckpoint and bgwriter is a bit tricky. 
Bgwriter now needs to deal with immediate checkpoint requests, like those 
coming from explicit CHECKPOINT or CREATE DATABASE commands, differently 
from those triggered by checkpoint_segments. I'm afraid there might be 
race conditions when a CHECKPOINT is issued at the same instant as 
checkpoint_segments triggers one. What might happen then is that the 
checkpoint is performed lazily, spreading the writes, and the CHECKPOINT 
command has to wait for that to finish which might take a long time. I 
have not been able to convince myself either that the race condition 
exists or that it doesn't.


Isn't it just a matter of having a flag to tell whether the checkpoint
should be quick or spread out, and have a command set the flag if a
checkpoint is already running?


Hmm. Thinking about this some more, the core problem is that when 
starting the checkpoint, bgwriter needs to read and clear the flag, 
which is not atomic as the patch stands.


I think we already have a race condition with ckpt_time_warn. The code 
to test and clear the flag does this:



if (BgWriterShmem->ckpt_time_warn && elapsed_secs < CheckPointWarning)
    ereport(LOG,
            (errmsg("checkpoints are occurring too frequently (%d seconds apart)",
                    elapsed_secs),
             errhint("Consider increasing the configuration parameter \"checkpoint_segments\".")));
BgWriterShmem->ckpt_time_warn = false;


In the extremely unlikely event that RequestCheckpoint sets 
ckpt_time_warn right before it's cleared, after the test in the 
if-statement, the warning is missed. That's a harmless and 
theoretical event: you'd have to run CHECKPOINT (or another command that 
triggers a checkpoint) at the same instant that an xlog switch triggers 
one, and all that happens is that you don't get the message in the log 
when you should. So this is not something to worry about in this case, 
but it would be more severe if we had the same problem in deciding 
whether a checkpoint should be spread out or not.


I think we just have to protect those signaling flags with a lock. It's 
not like it's on a critical path, and though we don't know what locks 
the callers to RequestCheckpoint hold, as long as we don't acquire any 
other locks while holding the new proposed lock, there's no danger of 
deadlocks.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-15 Thread Heikki Linnakangas

Michael Paesold wrote:

Heikki Linnakangas wrote:
Here's an updated WIP version of the LDC patch. It just spreads the 
writes, which achieves the goal of smoothing the checkpoint I/O spikes. 
I think sorting the writes etc. is interesting but falls in the 
category of further development and should be pushed to 8.4.


Why do you think so? Is it too much risk to adopt the sorted writes? The 
numbers shown by ITAGAKI Takahiro looked quite impressive, at least for 
large shared_buffers configurations. The reactions were rather 
positive, too.


Well, it is a very recent idea, and it's not clear that it's a win under 
all circumstances. Adopting that would need more testing, and at this 
late stage I'd like to just wrap up what we have and come back to this 
idea for 8.4.


If someone performs the tests with different hardware and workloads, I 
would be willing to consider it; the patch is still at an early stage, 
but it's a very isolated change and should therefore be easy to review. 
But if someone has the hardware and time to perform those tests, I'd like 
them to perform more testing with just LDC first. As Josh pointed out, 
it would be good to test it with more oddball workloads; all the tests 
I've done thus far have been with DBT-2.


In general, I am hoping that this patch, together with "Automatic 
adjustment of bgwriter_lru_maxpages", will finally make default 
postgresql configurations experience much less impact from checkpoints. 
For my taste, postgresql has recently got way too many knobs which one must 
tweak by hand... I welcome any approach to auto-tuning (and autovacuum!).


Sure, but that's another topic.


Patch status says "waiting on update from author":
http://archives.postgresql.org/pgsql-patches/2007-04/msg00331.php
Any updates on this?


No. I'm not actually clear what we're waiting for and from whom; I know 
I haven't had the time to review that in detail yet. IIRC we've seen two 
very similar patches in the discussions, one from Itagaki-san, and one 
from Greg Smith. I'm not sure which one of them we should use, they both 
implement roughly the same thing. But the biggest thing needed for that 
patch is testing with different workloads.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-15 Thread Alvaro Herrera
Heikki Linnakangas wrote:

> - The signaling between RequestCheckpoint and bgwriter is a bit tricky. 
> Bgwriter now needs to deal with immediate checkpoint requests, like those 
> coming from explicit CHECKPOINT or CREATE DATABASE commands, differently 
> from those triggered by checkpoint_segments. I'm afraid there might be 
> race conditions when a CHECKPOINT is issued at the same instant as 
> checkpoint_segments triggers one. What might happen then is that the 
> checkpoint is performed lazily, spreading the writes, and the CHECKPOINT 
> command has to wait for that to finish which might take a long time. I 
> have not been able to convince myself either that the race condition 
> exists or that it doesn't.

Isn't it just a matter of having a flag to tell whether the checkpoint
should be quick or spread out, and have a command set the flag if a
checkpoint is already running?

-- 
Alvaro Herrera   http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PATCHES] Load Distributed Checkpoints, revised patch

2007-06-15 Thread Michael Paesold

Heikki Linnakangas wrote:
Here's an updated WIP version of the LDC patch. It just spreads the 
writes, which achieves the goal of smoothing the checkpoint I/O spikes. I 
think sorting the writes etc. is interesting but falls in the category 
of further development and should be pushed to 8.4.


Why do you think so? Is it too much risk to adopt the sorted writes? The 
numbers shown by ITAGAKI Takahiro looked quite impressive, at least for 
large shared_buffers configurations. The reactions were rather 
positive, too.


In general, I am hoping that this patch, together with "Automatic 
adjustment of bgwriter_lru_maxpages", will finally make default 
postgresql configurations experience much less impact from checkpoints. 
For my taste, postgresql has recently got way too many knobs which one must 
tweak by hand... I welcome any approach to auto-tuning (and autovacuum!).


Patch status says "waiting on update from author":
http://archives.postgresql.org/pgsql-patches/2007-04/msg00331.php
Any updates on this?

Best Regards
Michael Paesold




[PATCHES] Load Distributed Checkpoints, revised patch

2007-06-15 Thread Heikki Linnakangas
Here's an updated WIP version of the LDC patch. It just spreads the 
writes, which achieves the goal of smoothing the checkpoint I/O spikes. I 
think sorting the writes etc. is interesting but falls in the category 
of further development and should be pushed to 8.4.


The documentation changes are not complete, GUC variables need 
descriptions, and some of the DEBUG elogs will go away in favor of the 
separate checkpoint logging patch that's in the queue. I'm fairly happy 
with the code now, but there are a few minor open issues:


- What units should we use for the new GUC variables? From an 
implementation point of view, it would be simplest if 
checkpoint_write_rate is given as pages/bgwriter_delay, similarly to 
bgwriter_*_maxpages. I never liked those *_maxpages settings, though; a 
more natural unit from the user's perspective would be KB/s.


- The signaling between RequestCheckpoint and bgwriter is a bit tricky. 
Bgwriter now needs to deal with immediate checkpoint requests, like those 
coming from explicit CHECKPOINT or CREATE DATABASE commands, differently 
from those triggered by checkpoint_segments. I'm afraid there might be 
race conditions when a CHECKPOINT is issued at the same instant as 
checkpoint_segments triggers one. What might happen then is that the 
checkpoint is performed lazily, spreading the writes, and the CHECKPOINT 
command has to wait for that to finish which might take a long time. I 
have not been able to convince myself either that the race condition 
exists or that it doesn't.



A few notes about the implementation:

- in bgwriter loop, CheckArchiveTimeout always calls time(NULL), while 
previously it used the value returned by another call earlier in the 
same codepath. That means we now call time(NULL) twice instead of once 
per bgwriter iteration, when archive_timeout is set. That doesn't seem 
significant to me, so I didn't try to optimize it.


- because of a small change in the meaning of force_checkpoint flag in 
bgwriter loop, checkpoints triggered by reaching checkpoint_segments 
call CreateCheckPoint(false, false) instead of CreateCheckPoint(false, 
true). That second argument is the "force"-flag. If it's false, 
CreateCheckPoint skips the checkpoint if there's been no WAL activity 
since the last checkpoint. That doesn't matter in this case: there surely has 
been WAL activity if we reach checkpoint_segments, and doing the check 
isn't that expensive.


- to coordinate the writes with checkpoint_segments, we need to 
read the WAL insertion location. To do that, we need to acquire the 
WALInsertLock. That means that in the worst case, WALInsertLock is 
acquired every bgwriter_delay when a checkpoint is in progress. I don't 
think that's a problem, it's only held for a very short duration, but I 
thought I'd mention it.


- How should we deal with changing GUC variables that affect LDC, on the 
fly when a checkpoint is in progress? The attached patch finishes the 
in-progress checkpoint ASAP, and reloads the config after that. We could 
reload the config immediately, but making the new settings effective 
immediately is not trivial.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: doc/src/sgml/config.sgml
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.126
diff -c -r1.126 config.sgml
*** doc/src/sgml/config.sgml	7 Jun 2007 19:19:56 -	1.126
--- doc/src/sgml/config.sgml	15 Jun 2007 09:35:32 -
***
*** 1565,1570 
--- 1565,1586 

   
  
+  
+   checkpoint_write_percent (floating point)
+   
+checkpoint_write_percent configuration parameter
+   
+   
+
+ To spread works in checkpoints, each checkpoint spends the specified
+ time and delays to write out all dirty buffers in the shared buffer
+ pool. The default value is 50.0 (50% of checkpoint_timeout).
+ This parameter can only be set in the postgresql.conf
+ file or on the server command line.
+
+   
+  
+ 
   
checkpoint_warning (integer)

Index: src/backend/access/transam/xlog.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/xlog.c,v
retrieving revision 1.272
diff -c -r1.272 xlog.c
*** src/backend/access/transam/xlog.c	31 May 2007 15:13:01 -	1.272
--- src/backend/access/transam/xlog.c	15 Jun 2007 08:14:18 -
***
*** 398,404 
  static void exitArchiveRecovery(TimeLineID endTLI,
  	uint32 endLogId, uint32 endLogSeg);
  static bool recoveryStopsHere(XLogRecord *record, bool *includeThis);
! static void CheckPointGuts(XLogRecPtr checkPointRedo);
  
  static bool XLogCheckBuffer(XLogRecData *rdata, bool doPageWrites,
  XLogRecPtr *lsn, BkpBlock *bkpb);
--- 398,404 
  static void exitArchiveRecovery(TimeLineID endTLI,