On 06/26/2015 02:08 PM, Heikki Linnakangas wrote:
I'm not sure what to do about this. With the attached patch, you get the
same leisurely pacing with restartpoints as you get with checkpoints,
but you exceed max_wal_size during recovery, by the amount determined by
checkpoint_completion_target.
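To put a rough number on that overshoot, here is a back-of-the-envelope sketch. It is my own illustration, not code from the patch: it assumes the checkpoint distance is derived from max_wal_size using the (2 + checkpoint_completion_target) divisor discussed later in this thread, and that a spread restartpoint lets roughly one completion-target's worth of extra WAL accumulate before it finishes.

/*
 * Hypothetical illustration only -- not code from the patch.  Derives a
 * checkpoint distance from max_wal_size and estimates how much WAL can
 * pile up while a spread restartpoint is still in progress.
 */
#include <stdio.h>

int
main(void)
{
	double	max_wal_size_mb = 1024.0;	/* a 1 GB cap, for illustration */
	double	completion_target = 0.5;

	double	ckpt_distance_mb = max_wal_size_mb / (2.0 + completion_target);
	double	overshoot_mb = completion_target * ckpt_distance_mb;

	printf("checkpoint distance: %.0f MB\n", ckpt_distance_mb);
	printf("possible overshoot during recovery: %.0f MB\n", overshoot_mb);
	return 0;
}

With a 1 GB cap and a target of 0.5, that works out to roughly 200 MB of overshoot.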
On Fri, Jun 26, 2015 at 7:08 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
I'm not sure what to do about this. With the attached patch, you get the
same leisurely pacing with restartpoints as you get with checkpoints, but
you exceed max_wal_size during recovery, by the amount determined by
On Fri, Jun 26, 2015 at 9:47 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
On 06/26/2015 03:40 PM, Robert Haas wrote:
Actually, I've seen a number of presentations indicating
that the pacing of checkpoints is already too aggressive near the
beginning, because as soon as we initiate the
On 05/27/2015 12:26 AM, Jeff Janes wrote:
On Thu, May 21, 2015 at 8:40 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, May 21, 2015 at 3:53 PM, Jeff Janes jeff.ja...@gmail.com wrote:
One of the points of max_wal_size and its predecessor is to limit how big
pg_xlog can grow. But running
On 06/26/2015 03:40 PM, Robert Haas wrote:
Actually, I've seen a number of presentations indicating
that the pacing of checkpoints is already too aggressive near the
beginning, because as soon as we initiate the checkpoint we have a
storm of full page writes. I'm sure we can come up with
On Thu, May 21, 2015 at 8:40 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, May 21, 2015 at 3:53 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Mon, Mar 16, 2015 at 11:05 PM, Jeff Janes jeff.ja...@gmail.com
wrote:
On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas
On Mon, Mar 16, 2015 at 11:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Everyone seems to be happy with the names and behaviour of the GUCs, so
committed.
The docs suggest that max_wal_size will be
On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
Everyone seems to be happy with the names and behaviour of the GUCs, so
committed.
The docs suggest that max_wal_size will be respected during archive
recovery (causing restartpoints and recycling), but I'm
Heikki,
On Monday, March 2, 2015, Heikki Linnakangas hlinn...@iki.fi wrote:
On 03/02/2015 08:05 PM, Josh Berkus wrote:
On 03/02/2015 05:38 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas hlinn...@iki.fi
wrote:
On
* Robert Haas (robertmh...@gmail.com) wrote:
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise the default for max_wal_size to
something much higher, like 1GB? That's what was
On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise the default for max_wal_size to
something much higher, like 1GB? That's what was discussed on this thread.
No conclusion was reached on that. Me and some others were against
raising the default, while others
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise the default for max_wal_size to
something much higher, like 1GB? That's what was discussed on this
thread.
No conclusion was reached
On 03/02/2015 08:05 PM, Josh Berkus wrote:
On 03/02/2015 05:38 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise
On 02/23/2015 08:56 AM, Heikki Linnakangas wrote:
Everyone seems to be happy with the names and behaviour of the GUCs, so
committed.
Yay!
But ... I thought we were going to raise the default for max_wal_size to
something much higher, like 1GB? That's what was discussed on this thread.
When I
On 2015-02-22 21:24:56 -0500, Robert Haas wrote:
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek p...@2ndquadrant.com wrote:
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in number of segments
and not size units, maybe
On 02/23/2015 01:01 PM, Andres Freund wrote:
On 2015-02-22 21:24:56 -0500, Robert Haas wrote:
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek p...@2ndquadrant.com wrote:
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in number of
segments and not size units, maybe it would be worth to change it also?
And the other thing is that, if set, the wal_keep_segments is the real
max_wal_size from
On Sat, Feb 14, 2015 at 4:43 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
On 02/04/2015 11:41 PM, Josh Berkus wrote:
On 02/04/2015 12:06 PM, Robert Haas wrote:
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus j...@agliodbs.com wrote:
Let me push max_wal_size and min_wal_size again as
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek p...@2ndquadrant.com wrote:
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in number of segments
and not size units, maybe it would be worth to change it also?
And the other
On 23/02/15 03:24, Robert Haas wrote:
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek p...@2ndquadrant.com wrote:
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in number of segments
and not size units, maybe it would be
On 13/02/15 18:43, Heikki Linnakangas wrote:
Ok, I don't hear any loud objections to min_wal_size and max_wal_size,
so let's go with that then.
Attached is a new version of this. It now comes in four patches. The
first three are just GUC-related preliminary work, the first of which I
posted on
On 02/04/2015 11:41 PM, Josh Berkus wrote:
On 02/04/2015 12:06 PM, Robert Haas wrote:
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus j...@agliodbs.com wrote:
Let me push max_wal_size and min_wal_size again as our new parameter
names, because:
* does what it says on the tin
* new user friendly
*
On 02/05/2015 01:28 PM, Robert Haas wrote:
On Thu, Feb 5, 2015 at 2:11 PM, Josh Berkus j...@agliodbs.com wrote:
Except that, when setting up servers for customers, one thing I pretty
much always do for them is temporarily increase checkpoint_segments for
the initial data load. So having
On Thu, Feb 5, 2015 at 2:11 PM, Josh Berkus j...@agliodbs.com wrote:
Except that, when setting up servers for customers, one thing I pretty
much always do for them is temporarily increase checkpoint_segments for
the initial data load. So having Postgres do this automatically would
be a
On 02/05/2015 11:28 PM, Robert Haas wrote:
On Thu, Feb 5, 2015 at 2:11 PM, Josh Berkus j...@agliodbs.com wrote:
Default of 4 for min_wal_size?
I assume you mean 4 segments; why not 3 as currently? As long as the
system has the latitude to ratchet it up when needed, there seems to
be little
On 02/04/2015 04:16 PM, David Steele wrote:
On 2/4/15 3:06 PM, Robert Haas wrote:
Hmmm, I see your point. I spend a lot of time on AWS and in
container-world, where disk space is a lot more constrained. However,
it probably makes more sense to recommend non-default settings for that
On 2015-02-05 09:42:37 -0500, Robert Haas wrote:
I previously proposed 100 segments, or 1.6GB. If that seems too
large, how about 64 segments, or 1.024GB? I think there will be few
people who can't tolerate a gigabyte of xlog under peak load, and an
awful lot who will benefit from it.
It'd
On 02/05/2015 04:47 PM, Andres Freund wrote:
On 2015-02-05 09:42:37 -0500, Robert Haas wrote:
I previously proposed 100 segments, or 1.6GB. If that seems too
large, how about 64 segments, or 1.024GB? I think there will be few
people who can't tolerate a gigabyte of xlog under peak load, and
On Wed, Feb 4, 2015 at 4:41 PM, Josh Berkus j...@agliodbs.com wrote:
That's certainly better, but I think we should go further. Again,
you're not committed to using this space all the time, and if you are
using it you must have a lot of write activity, which means you are
not running on a tin
Default of 4 for min_wal_size?
I assume you mean 4 segments; why not 3 as currently? As long as the
system has the latitude to ratchet it up when needed, there seems to
be little advantage to raising the minimum. Of course I guess there
must be some advantage or Heikki wouldn't have made
On Thu, Feb 5, 2015 at 4:42 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Actually, perhaps we should have a boolean setting that just implies
min=max, instead of having a configurable minimum? That would cover all of
those reasons pretty well. So we would have a max_wal_size setting,
On 2/5/15 4:53 PM, Josh Berkus wrote:
Actually, perhaps we should have a boolean setting that just implies
min=max, instead of having a configurable minimum? That would cover all
of those reasons pretty well. So we would have a max_wal_size setting,
and a boolean preallocate_all_wal = on |
On 02/05/2015 01:42 PM, Heikki Linnakangas wrote:
There are a few reasons for making the minimum configurable:
Any thoughts on what the default minimum should be, if the default max
is 1.1GB/64?
1. Creating new segments when you need them is not free, so if you have
a workload with occasional
On Tue, Feb 3, 2015 at 4:18 PM, Josh Berkus j...@agliodbs.com wrote:
That's different from our current checkpoint_segments setting. With
checkpoint_segments, the documented formula for calculating the disk usage
is (2 + checkpoint_completion_target) * checkpoint_segments * 16 MB. That's
a lot
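As a quick worked example of that documented formula, the parameter values below are illustrative only, not numbers proposed in this thread:

/*
 * Worked example of the documented pg_xlog usage formula quoted above:
 *   (2 + checkpoint_completion_target) * checkpoint_segments * 16 MB
 */
#include <stdio.h>

int
main(void)
{
	double	completion_target = 0.5;
	int		checkpoint_segments = 10;
	int		segment_size_mb = 16;

	double	usage_mb = (2.0 + completion_target) *
		checkpoint_segments * segment_size_mb;

	/* (2 + 0.5) * 10 * 16 MB = 400 MB, versus the 160 MB a flat
	 * 10-segment cap would suggest */
	printf("expected pg_xlog usage: %.0f MB\n", usage_mb);
	return 0;
}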
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus j...@agliodbs.com wrote:
Let me push max_wal_size and min_wal_size again as our new parameter
names, because:
* does what it says on the tin
* new user friendly
* encourages people to express it in MB, not segments
* very different from the old
On 02/04/2015 09:28 AM, Robert Haas wrote:
On Tue, Feb 3, 2015 at 4:18 PM, Josh Berkus j...@agliodbs.com wrote:
That's different from our current checkpoint_segments setting. With
checkpoint_segments, the documented formula for calculating the disk usage
is (2 + checkpoint_completion_target) *
On 02/04/2015 12:06 PM, Robert Haas wrote:
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus j...@agliodbs.com wrote:
Let me push max_wal_size and min_wal_size again as our new parameter
names, because:
* does what it says on the tin
* new user friendly
* encourages people to express it in MB,
On Thu, Feb 5, 2015 at 3:11 AM, Josh Berkus j...@agliodbs.com wrote:
On 02/04/2015 12:06 PM, Robert Haas wrote:
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus j...@agliodbs.com wrote:
Let me push max_wal_size and min_wal_size again as our new parameter names, because:
* does what it says
Missed adding pgsql-hackers group while replying.
Regards,
Venkata Balaji N
On Thu, Feb 5, 2015 at 12:48 PM, Venkata Balaji N nag1...@gmail.com wrote:
On Fri, Jan 30, 2015 at 7:58 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 01/30/2015 04:48 AM, Venkata Balaji N wrote:
I
On 2/4/15 3:06 PM, Robert Haas wrote:
Hmmm, I see your point. I spend a lot of time on AWS and in
container-world, where disk space is a lot more constrained. However,
it probably makes more sense to recommend non-default settings for that
environment, since it requires non-default settings
On 2/4/15 6:16 PM, David Steele wrote:
On 2/4/15 3:06 PM, Robert Haas wrote:
Hmmm, I see your point. I spend a lot of time on AWS and in
container-world, where disk space is a lot more constrained. However,
it probably makes more sense to recommend non-default settings for that
environment,
On 02/02/2015 04:21 PM, Andres Freund wrote:
Hi,
On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
Also, I'd like to propose that we set the default value of
max_checkpoint_segments/checkpoint_wal_size to something at least an
order of magnitude larger than the current default setting.
+1
I
Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 02/02/2015 04:21 PM, Andres Freund wrote:
On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
Also, I'd like to propose that we set the default value of
max_checkpoint_segments/checkpoint_wal_size to something at
least an order of magnitude
On Tue, Feb 3, 2015 at 7:31 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/02/2015 03:36 PM, Robert Haas wrote:
Second, I *think* that these settings are symmetric and, if that's
right, then I suggest that they ought to be named symmetrically.
Basically, I think you've got
On Tue, Feb 3, 2015 at 10:44 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Works for me. However, note that max_checkpoint_segments = 10 doesn't mean
the same as current checkpoint_segments = 10. With checkpoint_segments = 10
you end up with about 2x-3x as much WAL as with
On 03/02/15 16:50, Robert Haas wrote:
On Tue, Feb 3, 2015 at 10:44 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That's the whole point of this patch. max_checkpoint_segments = 10, or
max_checkpoint_segments = 160 MB, means that the system will begin a
checkpoint so that when the
On 02/03/2015 05:19 PM, Robert Haas wrote:
On Tue, Feb 3, 2015 at 7:31 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/02/2015 03:36 PM, Robert Haas wrote:
Second, I *think* that these settings are symmetric and, if that's
right, then I suggest that they ought to be named
On 02/02/2015 03:36 PM, Robert Haas wrote:
Second, I *think* that these settings are symmetric and, if that's
right, then I suggest that they ought to be named symmetrically.
Basically, I think you've got min_checkpoint_segments (the number of
recycled segments we keep around always) and
On 02/03/2015 07:50 AM, Robert Haas wrote:
On Tue, Feb 3, 2015 at 10:44 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That's the whole point of this patch. max_checkpoint_segments = 10, or
max_checkpoint_segments = 160 MB, means that the system will begin a
checkpoint so that when the
On Fri, Jan 30, 2015 at 3:58 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
During my tests, I did not observe the significance of the
min_recycle_wal_size parameter yet. Of course, I had sufficient disk space
for pg_xlog.
I would like to understand more about min_recycle_wal_size
Hi,
On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
Also, I'd like to propose that we set the default value of
max_checkpoint_segments/checkpoint_wal_size to something at least an
order of magnitude larger than the current default setting.
+1
I think we need to increase checkpoint_timeout
On 01/30/2015 04:48 AM, Venkata Balaji N wrote:
I performed series of tests for this patch and would like to share the
results. My comments are in-line.
Thanks for the testing!
*Test 1 :*
In this test, I see removed+recycled segments = 3 (except for the first 3
checkpoint cycles) and has
On Fri, Jan 2, 2015 at 3:27 PM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
On 01/01/2015 03:24 AM, Josh Berkus wrote:
Please remind me because I'm having trouble finding this in the
archives: how does wal_keep_segments interact with the new settings?
It's not very straightforward.
On 01/04/2015 11:44 PM, Josh Berkus wrote:
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation of CheckPointSegments.
If you set wal_keep_segments high
On 2015-01-05 11:34:54 +0200, Heikki Linnakangas wrote:
On 01/04/2015 11:44 PM, Josh Berkus wrote:
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation
On 01/05/2015 12:06 PM, Andres Freund wrote:
On 2015-01-05 11:34:54 +0200, Heikki Linnakangas wrote:
On 01/04/2015 11:44 PM, Josh Berkus wrote:
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
On 01/05/2015 09:06 AM, Heikki Linnakangas wrote:
I wasn't clear on my opinion here. I think I understood what Josh meant,
but I don't think we should do it. Seems like unnecessary nannying of
the DBA. Let's just mention in the manual that if you set
wal_keep_segments higher than [insert
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation of CheckPointSegments.
If you set wal_keep_segments high enough, checkpoint_wal_size will be
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation of CheckPointSegments.
If you set wal_keep_segments high enough, checkpoint_wal_size will be
exceeded. The other alternative would be to force a
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation of CheckPointSegments.
If you set wal_keep_segments high enough, checkpoint_wal_size will be
exceeded. The other alternative would be to force a checkpoint earlier,
i.e. lower
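A small sketch of the consequence being described, as my own illustration assuming the usual 16 MB segment size; the max()-style floor is a reading of the behaviour, not code from the patch. The space actually held in pg_xlog is roughly the larger of what the checkpoint schedule needs and what wal_keep_segments forces you to retain.

/*
 * Illustration only: wal_keep_segments is not folded into the checkpoint
 * schedule, so the segments it pins are retained regardless of
 * checkpoint_wal_size.  Rough lower bound on the pg_xlog footprint:
 */
#include <stdio.h>

static double
min_pg_xlog_mb(double checkpoint_wal_size_mb, int wal_keep_segments)
{
	double	kept_mb = wal_keep_segments * 16.0;		/* 16 MB per segment */

	return (kept_mb > checkpoint_wal_size_mb) ? kept_mb : checkpoint_wal_size_mb;
}

int
main(void)
{
	/* e.g. a 512 MB budget is overridden by wal_keep_segments = 100 */
	printf("%.0f MB\n", min_pg_xlog_mb(512.0, 100));	/* prints 1600 MB */
	return 0;
}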
(reviving an old thread)
On 08/24/2013 12:53 AM, Josh Berkus wrote:
On 08/23/2013 02:08 PM, Heikki Linnakangas wrote:
Here's a bigger patch, which does more. It is based on the ideas in the
post I started this thread with, with feedback incorporated from the
long discussion. With this patch,
On 09/01/2013 10:37 AM, Amit Kapila wrote:
On Sat, Aug 24, 2013 at 2:38 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
a.
In XLogFileInit(),
/*
! * XXX: What should we use as max_segno? We used to use XLOGfileslop when
! * that was a constant, but that was always a bit dubious:
Heikki,
Thanks for getting back to this! I really look forward to simplifying
WAL tuning for users.
min_recycle_wal_size
checkpoint_wal_size
snip
These settings are fairly intuitive for a DBA to tune. You begin by
figuring out how much disk space you can afford to spend on WAL, and set
On Sat, Aug 24, 2013 at 12:08:30AM +0300, Heikki Linnakangas wrote:
You can also set min_recycle_wal_size = checkpoint_wal_size, which
gets you the same behavior as without the patch, except that it's
more intuitive to set it in terms of MB of WAL space required,
instead of # of segments
On Sat, Aug 24, 2013 at 2:38 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 03.07.2013 21:28, Peter Eisentraut wrote:
On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
Here's a patch implementing that. Docs not updated yet. I did not change
the way checkpoint_segments triggers
On 03.07.2013 21:28, Peter Eisentraut wrote:
On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
Here's a patch implementing that. Docs not updated yet. I did not change
the way checkpoint_segments triggers checkpoints - that can be a
separate patch. This only decouples the segment preallocation
On 08/23/2013 02:08 PM, Heikki Linnakangas wrote:
Here's a bigger patch, which does more. It is based on the ideas in the
post I started this thread with, with feedback incorporated from the
long discussion. With this patch, WAL disk space usage is controlled by
two GUCs:
On 07/03/2013 11:28 AM, Peter Eisentraut wrote:
On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
I don't understand what this patch, by itself, will accomplish in terms
of the originally stated goals of making checkpoint_segments easier to
tune, and controlling disk space used. To some degree, it
On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
On 06.06.2013 20:24, Josh Berkus wrote:
Yeah, something like that :-). I was thinking of letting the estimate
decrease like a moving average, but react to any increases immediately.
Same thing we do in bgwriter to track buffer allocations:
Seems
On 6/7/13 2:43 PM, Robert Haas wrote:
name. What I would like to see is a single number here in memory units that
replaces both checkpoint_segments and wal_keep_segments.
This isn't really making sense to me. I don't think we should assume
that someone who wants to keep WAL around for
On 06/07/2013 01:00 AM, Josh Berkus wrote:
Daniel,
So your suggestion is that if archiving is falling behind, we should
introduce delays on COMMIT in order to slow down the rate of WAL writing?
Delaying commit wouldn't be enough; consider a huge COPY, which can
produce a lot of WAL at a high
On 06/06/2013 03:21 PM, Joshua D. Drake wrote:
Not to be unkind, but the problems of the uninformed certainly are not
the problems of the informed. Or perhaps they are certainly the
problems of the informed :P.
I'm not convinced that's a particularly good argument not to improve
something. Sure,
On Thu, Jun 6, 2013 at 10:43 PM, Greg Smith g...@2ndquadrant.com wrote:
The general complaint the last time I suggested a change in this area, to
make checkpoint_segments larger for the average user, was that some people
had seen workloads where that was counterproductive. Pretty sure Kevin
Robert Haas robertmh...@gmail.com wrote:
(As to why smaller checkpoint_segments can help, here's my guess:
if checkpoint_segments is relatively small, then when we recycle
a segment we're likely to find its data already in cache. That's
a lot better than reading it back in from disk just to
On Fri, Jun 7, 2013 at 3:14 PM, Kevin Grittner kgri...@ymail.com wrote:
Some findings were unsurprising, like that a direct connection
between the servers using a cross-wired network patch cable was
faster than plugging both machines into the same switch. But we
tested all of our assumptions,
Robert Haas robertmh...@gmail.com wrote:
Kevin Grittner kgri...@ymail.com wrote:
One such surprise was that the conversion ran faster, even on a
largish database of around 200GB, with 3 checkpoint_segments
than with larger settings.
!
I can't account for that finding, because my
On Wed, Jun 5, 2013 at 10:27 PM, Joshua D. Drake j...@commandprompt.com wrote:
On 6/5/2013 10:07 PM, Daniel Farina wrote:
If I told you there were some of us who would prefer to attenuate the
rate that things get written rather than cancel or delay archiving for
a long period of time, would
On 6/5/2013 10:54 PM, Peter Geoghegan wrote:
On Wed, Jun 5, 2013 at 10:27 PM, Joshua D. Drake j...@commandprompt.com wrote:
I just wonder if we are looking in the right place (outside of some obvious
badness like the PANIC running out of disk space).
So you don't think we should PANIC on
On Wed, Jun 5, 2013 at 11:05 PM, Joshua D. Drake j...@commandprompt.com wrote:
On 6/5/2013 10:54 PM, Peter Geoghegan wrote:
On Wed, Jun 5, 2013 at 10:27 PM, Joshua D. Drake j...@commandprompt.com
wrote:
I just wonder if we are looking in the right place (outside of some
obvious
badness
Hi,
On Wed, Jun 5, 2013 at 11:05 PM, Joshua D. Drake j...@commandprompt.com wrote:
On 6/5/2013 10:54 PM, Peter Geoghegan wrote:
On Wed, Jun 5, 2013 at 10:27 PM, Joshua D. Drake j...@commandprompt.com
wrote:
Instead of running out of disk space PANIC we should just write to an
emergency
On 6/5/2013 11:09 PM, Daniel Farina wrote:
Instead of running out of disk space PANIC we should just write to an
emergency location within PGDATA and log very loudly that the SA isn't
paying attention. Perhaps if that area starts to get to an unhappy place we
immediately bounce into read-only
On Wed, Jun 5, 2013 at 11:28 PM, Joshua D. Drake j...@commandprompt.com wrote:
I have zero doubt that in your case it is true and desirable. I just don't
know that it is a positive solution to the problem as a whole. Your case is
rather limited to your environment, which is rather limited to
On 6/5/2013 11:25 PM, Harold Giménez wrote:
Instead of running out of disk space PANIC we should just write
to an emergency location within PGDATA
This merely buys you some time, but with aggressive and sustained
write throughput you are left on the same spot. Practically speaking
On 6/5/2013 11:31 PM, Peter Geoghegan wrote:
On Wed, Jun 5, 2013 at 11:28 PM, Joshua D. Drake j...@commandprompt.com wrote:
I have zero doubt that in your case it is true and desirable. I just don't
know that it is a positive solution to the problem as a whole. Your case is
rather limited to
On 06.06.2013 06:20, Joshua D. Drake wrote:
3. The spread checkpoints have always confused me. If anything we want a
checkpoint to be fast and short because:
(I'm sure you know this, but:) If you perform a checkpoint as fast and
short as possible, the sudden burst of writes and fsyncs will
On 05.06.2013 23:16, Josh Berkus wrote:
For limiting the time required to recover after crash,
checkpoint_segments is awkward because it's difficult to calculate how
long recovery will take, given checkpoint_segments=X. A bulk load can
use up segments really fast, and recovery will be fast,
On 6/6/2013 1:11 AM, Heikki Linnakangas wrote:
(I'm sure you know this, but:) If you perform a checkpoint as fast and
short as possible, the sudden burst of writes and fsyncs will
overwhelm the I/O subsystem, and slow down queries. That's what we saw
before spread checkpoints: when a
On 06.06.2013 11:42, Joshua D. Drake wrote:
On 6/6/2013 1:11 AM, Heikki Linnakangas wrote:
Yes, checkpoint_segments is awkward. We shouldn't have to set it at all.
It should be gone.
The point of having checkpoint_segments or max_wal_size is to put a
limit (albeit a soft one) on the amount of
On 05.06.2013 22:18, Kevin Grittner wrote:
Heikki Linnakangas hlinnakan...@vmware.com wrote:
I was not thinking of making it a hard limit. It would be just
like checkpoint_segments from that point of view - if a
checkpoint takes a long time, max_wal_size might still be
exceeded.
Then I
On 05.06.2013 22:24, Fujii Masao wrote:
On Thu, Jun 6, 2013 at 3:35 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
The checkpoint spreading code already tracks if the checkpoint is on
schedule, and it takes into account both checkpoint_timeout and
checkpoint_segments. Ie. if you consume
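For readers who haven't looked at that code, the scheduling check works conceptually like the sketch below. This is a simplified paraphrase of the idea, not the actual function in xlog.c, and it ignores the scaling by checkpoint_completion_target: a checkpoint is behind schedule if the fraction of work done is smaller than the fraction of either budget, time or WAL, that has already been consumed.

/*
 * Simplified sketch of the "is the checkpoint on schedule?" test referred
 * to above.  All fractions are 0.0 .. 1.0; the real code lives in the
 * checkpointer and scales these by checkpoint_completion_target, but the
 * shape of the comparison is the same idea.
 */
#include <stdbool.h>

static bool
checkpoint_on_schedule(double progress,			/* fraction of buffers written */
					   double elapsed_time_frac,	/* of checkpoint_timeout */
					   double consumed_wal_frac)	/* of the WAL budget */
{
	if (progress < elapsed_time_frac)
		return false;			/* behind the clock */
	if (progress < consumed_wal_frac)
		return false;			/* behind the WAL budget */
	return true;
}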
Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 05.06.2013 22:18, Kevin Grittner wrote:
Heikki Linnakangas hlinnakan...@vmware.com wrote:
I was not thinking of making it a hard limit. It would be just
like checkpoint_segments from that point of view - if a
checkpoint takes a long time,
On 06.06.2013 15:31, Kevin Grittner wrote:
Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 05.06.2013 22:18, Kevin Grittner wrote:
Heikki Linnakangas hlinnakan...@vmware.com wrote:
I was not thinking of making it a hard limit. It would be just
like checkpoint_segments from that point
Daniel,
So your suggestion is that if archiving is falling behind, we should
introduce delays on COMMIT in order to slow down the rate of WAL writing?
Just so I'm clear.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Then I suggest we not use exactly that name. I feel quite sure we
would get complaints from people if something labeled as max was
exceeded -- especially if they set that to the actual size of a
filesystem dedicated to WAL files.
You're probably right. Any suggestions for a better name?
On Wed, Jun 5, 2013 at 8:20 PM, Joshua D. Drake j...@commandprompt.com wrote:
On 06/05/2013 05:37 PM, Robert Haas wrote:
- If it looks like we're going to exceed limit #3 before the
checkpoint completes, we start exerting back-pressure on writers by
making them wait every time they write
On Thu, Jun 6, 2013 at 1:42 AM, Joshua D. Drake j...@commandprompt.com wrote:
I may be confused but it is my understanding that bgwriter writes out the
data from the shared buffer cache that is dirty based on an interval and a
max pages written.
It primarily writes out based on how many
On 06.06.2013 20:24, Josh Berkus wrote:
Yeah, something like that :-). I was thinking of letting the estimate
decrease like a moving average, but react to any increases immediately.
Same thing we do in bgwriter to track buffer allocations:
Seems reasonable.
Here's a patch implementing that.
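For readers who haven't looked at the bgwriter code, the smoothing rule being described looks roughly like the sketch below. It is a paraphrase of the idea, not the actual patch; the variable names and the 0.9/0.1 smoothing constants are made up for illustration.

/*
 * Sketch of the asymmetric estimate described above: react to increases
 * immediately, decay slowly otherwise.
 */
static double
update_wal_estimate(double prev_estimate, double wal_used_this_cycle)
{
	if (wal_used_this_cycle > prev_estimate)
		return wal_used_this_cycle;		/* increases take effect at once */

	/* otherwise decay like a moving average toward the recent value */
	return (0.9 * prev_estimate) + (0.1 * wal_used_this_cycle);
}

Recycled-segment preallocation would then follow that estimate, kept between the configured minimum and maximum WAL sizes.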
Given the behavior of xlog, I'd want to adjust the
algo so that peak usage on a 24-hour basis would affect current
preallocation. That is, if a site regularly has a peak from 2-3pm where
they're using 180 segments/cycle, then they should still be somewhat
higher at 2am than a database which
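One way to read that suggestion, purely as a sketch of Josh's idea rather than anything proposed as a patch: also remember the highest per-cycle usage seen over the last day, and keep the preallocation estimate from falling far below some fraction of it. The 0.5 fraction and the bookkeeping below are invented for illustration.

/*
 * Sketch of the 24-hour-peak idea as I read it: the estimate never drops
 * below half of the largest per-cycle WAL usage seen in the last day.
 */
static double
apply_daily_peak_floor(double estimate, double peak_last_24h)
{
	double	floor_mb = 0.5 * peak_last_24h;

	return (estimate < floor_mb) ? floor_mb : estimate;
}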
On 6/6/13 4:42 AM, Joshua D. Drake wrote:
On 6/6/2013 1:11 AM, Heikki Linnakangas wrote:
(I'm sure you know this, but:) If you perform a checkpoint as fast and
short as possible, the sudden burst of writes and fsyncs will
overwhelm the I/O subsystem, and slow down queries. That's what we saw
On 6/6/13 4:41 AM, Heikki Linnakangas wrote:
I was thinking of letting the estimate
decrease like a moving average, but react to any increases immediately.
Same thing we do in bgwriter to track buffer allocations:
Combine what your submitted patch does and this idea, and you'll have