On 06/26/2015 02:08 PM, Heikki Linnakangas wrote:
I'm not sure what to do about this. With the attached patch, you get the
same leisurely pacing with restartpoints as you get with checkpoints,
but you exceed max_wal_size during recovery, by the amount determined by
checkpoint_completion_target.
On Fri, Jun 26, 2015 at 9:47 AM, Heikki Linnakangas wrote:
> On 06/26/2015 03:40 PM, Robert Haas wrote:
>> Actually, I've seen a number of presentations indicating
>> that the pacing of checkpoints is already too aggressive near the
>> beginning, because as soon as we initiate the checkpoint we ha
On 06/26/2015 03:40 PM, Robert Haas wrote:
Actually, I've seen a number of presentations indicating
that the pacing of checkpoints is already too aggressive near the
beginning, because as soon as we initiate the checkpoint we have a
storm of full page writes. I'm sure we can come up with arbitra
On Fri, Jun 26, 2015 at 7:08 AM, Heikki Linnakangas wrote:
> I'm not sure what to do about this. With the attached patch, you get the
> same leisurely pacing with restartpoints as you get with checkpoints, but
> you exceed max_wal_size during recovery, by the amount determined by
> checkpoint_comp
On 05/27/2015 12:26 AM, Jeff Janes wrote:
On Thu, May 21, 2015 at 8:40 AM, Fujii Masao wrote:
On Thu, May 21, 2015 at 3:53 PM, Jeff Janes wrote:
One of the points of max_wal_size and its predecessor is to limit how big
pg_xlog can grow. But running out of disk space on pg_xlog is no more
f
On Thu, May 21, 2015 at 8:40 AM, Fujii Masao wrote:
> On Thu, May 21, 2015 at 3:53 PM, Jeff Janes wrote:
> > On Mon, Mar 16, 2015 at 11:05 PM, Jeff Janes wrote:
> >>
> >> On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas
> >> wrote:
> >>>
> >>>
> >>> Everyone seems to be happy with the nam
On 21 May 2015 at 02:53, Jeff Janes wrote:
> I think the old behavior, where restartpoints were driven only by time and
> not by volume, was a misfeature.
>
I have no objection to changing that. The main essence of that was to
ensure that a standby could act differently to a master, given diffe
On Thu, May 21, 2015 at 3:53 PM, Jeff Janes wrote:
> On Mon, Mar 16, 2015 at 11:05 PM, Jeff Janes wrote:
>>
>> On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas
>> wrote:
>>>
>>>
>>> Everyone seems to be happy with the names and behaviour of the GUCs, so
>>> committed.
>>
>>
>>
>> The docs sug
On Mon, Mar 16, 2015 at 11:05 PM, Jeff Janes wrote:
> On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas <
> hlinnakan...@vmware.com> wrote:
>
>>
>> Everyone seems to be happy with the names and behaviour of the GUCs, so
>> committed.
>
>
>
> The docs suggest that max_wal_size will be respected
On Mon, Feb 23, 2015 at 8:56 AM, Heikki Linnakangas wrote:
>
> Everyone seems to be happy with the names and behaviour of the GUCs, so
> committed.
The docs suggest that max_wal_size will be respected during archive
recovery (causing restartpoints and recycling), but I'm not seeing that
happen
On 03/02/2015 12:23 PM, Heikki Linnakangas wrote:
> On 03/02/2015 08:05 PM, Josh Berkus wrote:
>> That was the impression I had too, which was why I was surprised. The
>> last post on the topic was one by Robert Haas, agreeing with me on a
>> value of 1GB, and there were zero objections after that
Heikki,
On Monday, March 2, 2015, Heikki Linnakangas wrote:
> On 03/02/2015 08:05 PM, Josh Berkus wrote:
>
>> On 03/02/2015 05:38 AM, Stephen Frost wrote:
>>
>>> * Robert Haas (robertmh...@gmail.com) wrote:
>>>
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas
wrote:
> On 02
On 03/02/2015 08:05 PM, Josh Berkus wrote:
On 03/02/2015 05:38 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas wrote:
On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise the default for m
On 03/02/2015 05:38 AM, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
>> On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas wrote:
>>> On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise the default for max_wal_size to
something much
* Robert Haas (robertmh...@gmail.com) wrote:
> On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas wrote:
> > On 02/26/2015 01:32 AM, Josh Berkus wrote:
> >> But ... I thought we were going to raise the default for max_wal_size to
> >> something much higher, like 1GB? That's what was discussed on
On Mon, Mar 2, 2015 at 6:43 AM, Heikki Linnakangas wrote:
> On 02/26/2015 01:32 AM, Josh Berkus wrote:
>> But ... I thought we were going to raise the default for max_wal_size to
>> something much higher, like 1GB? That's what was discussed on this
>> thread.
>
> No conclusion was reached on that
On 02/26/2015 01:32 AM, Josh Berkus wrote:
But ... I thought we were going to raise the default for max_wal_size to
something much higher, like 1GB? That's what was discussed on this thread.
No conclusion was reached on that. Some others and I were against
raising the default, while others w
On 02/23/2015 08:56 AM, Heikki Linnakangas wrote:
> Everyone seems to be happy with the names and behaviour of the GUCs, so
> committed.
Yay!
But ... I thought we were going to raise the default for max_wal_size to
something much higher, like 1GB? That's what was discussed on this thread.
When
On 02/23/2015 01:01 PM, Andres Freund wrote:
On 2015-02-22 21:24:56 -0500, Robert Haas wrote:
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek wrote:
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in number of segments
and no
On 2015-02-22 21:24:56 -0500, Robert Haas wrote:
> On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek wrote:
> > I am wondering a bit about interaction with wal_keep_segments.
> > One thing is that wal_keep_segments is still specified in number of segments
> > and not size units, maybe it would be wor
On Sat, Feb 14, 2015 at 4:43 AM, Heikki Linnakangas wrote:
> On 02/04/2015 11:41 PM, Josh Berkus wrote:
>
>> On 02/04/2015 12:06 PM, Robert Haas wrote:
>>
>>> On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus wrote:
>>>
Let me push "max_wal_size" and "min_wal_size" again as our new parameter
On 23/02/15 03:24, Robert Haas wrote:
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek wrote:
I am wondering a bit about interaction with wal_keep_segments.
One thing is that wal_keep_segments is still specified in number of segments
and not size units, maybe it would be worth changing it as well?
On Sat, Feb 21, 2015 at 11:29 PM, Petr Jelinek wrote:
> I am wondering a bit about interaction with wal_keep_segments.
> One thing is that wal_keep_segments is still specified in number of segments
> > and not size units, maybe it would be worth changing it as well?
> And the other thing is that, if s
>
> I am wondering a bit about interaction with wal_keep_segments.
> One thing is that wal_keep_segments is still specified in number of
> segments and not size units, maybe it would be worth changing it as well?
> And the other thing is that, if set, the wal_keep_segments is the real
> max_wal_size
On 13/02/15 18:43, Heikki Linnakangas wrote:
Ok, I don't hear any loud objections to min_wal_size and max_wal_size,
so let's go with that then.
Attached is a new version of this. It now comes in four patches. The
first three are just GUC-related preliminary work, the first of which I
posted on
On 02/04/2015 11:41 PM, Josh Berkus wrote:
On 02/04/2015 12:06 PM, Robert Haas wrote:
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus wrote:
Let me push "max_wal_size" and "min_wal_size" again as our new parameter
names, because:
* does what it says on the tin
* new-user friendly
* encourages peo
On 2/5/15 4:53 PM, Josh Berkus wrote:
>> Actually, perhaps we should have a boolean setting that just implies
>> min=max, instead of having a configurable minimum? That would cover all
>> of those reasons pretty well. So we would have a "max_wal_size" setting,
>> and a boolean "preallocate_all_wal
On 02/05/2015 01:42 PM, Heikki Linnakangas wrote:
> There are a few reasons for making the minimum configurable:
Any thoughts on what the default minimum should be, if the default max
is 1.1GB/64?
> 1. Creating new segments when you need them is not free, so if you have
> a workload with occasion
>>> Default of 4 for min_wal_size?
>>
>> I assume you mean 4 segments; why not 3 as currently? As long as the
>> system has the latitude to ratchet it up when needed, there seems to
>> be little advantage to raising the minimum. Of course I guess there
>> must be some advantage or Heikki wouldn't
On Thu, Feb 5, 2015 at 4:42 PM, Heikki Linnakangas
wrote:
> Actually, perhaps we should have a boolean setting that just implies
> min=max, instead of having a configurable minimum? That would cover all of
> those reasons pretty well. So we would have a "max_wal_size" setting, and a
> boolean "pr
On 02/05/2015 11:28 PM, Robert Haas wrote:
On Thu, Feb 5, 2015 at 2:11 PM, Josh Berkus wrote:
Default of 4 for min_wal_size?
I assume you mean 4 segments; why not 3 as currently? As long as the
system has the latitude to ratchet it up when needed, there seems to
be little advantage to raisin
On 02/05/2015 01:28 PM, Robert Haas wrote:
> On Thu, Feb 5, 2015 at 2:11 PM, Josh Berkus wrote:
>> Except that, when setting up servers for customers, one thing I pretty
>> much always do for them is temporarily increase checkpoint_segments for
>> the initial data load. So having Postgres do this
On Thu, Feb 5, 2015 at 2:11 PM, Josh Berkus wrote:
> Except that, when setting up servers for customers, one thing I pretty
> much always do for them is temporarily increase checkpoint_segments for
> the initial data load. So having Postgres do this automatically would
> be a feature, not a bug.
On 02/04/2015 04:16 PM, David Steele wrote:
> On 2/4/15 3:06 PM, Robert Haas wrote:
>>> Hmmm, I see your point. I spend a lot of time on AWS and in
>>> container-world, where disk space is a lot more constrained. However,
>>> it probably makes more sense to recommend non-default settings for that
On 02/05/2015 04:47 PM, Andres Freund wrote:
On 2015-02-05 09:42:37 -0500, Robert Haas wrote:
I previously proposed 100 segments, or 1.6GB. If that seems too
large, how about 64 segments, or 1.024GB? I think there will be few
people who can't tolerate a gigabyte of xlog under peak load, and an
On 2015-02-05 09:42:37 -0500, Robert Haas wrote:
> I previously proposed 100 segments, or 1.6GB. If that seems too
> large, how about 64 segments, or 1.024GB? I think there will be few
> people who can't tolerate a gigabyte of xlog under peak load, and an
> awful lot who will benefit from it.
It
On Wed, Feb 4, 2015 at 4:41 PM, Josh Berkus wrote:
>> That's certainly better, but I think we should go further. Again,
>> you're not committed to using this space all the time, and if you are
>> using it you must have a lot of write activity, which means you are
>> not running on a tin can and a
Missed adding "pgsql-hackers" group while replying.
Regards,
Venkata Balaji N
On Thu, Feb 5, 2015 at 12:48 PM, Venkata Balaji N wrote:
>
>
> On Fri, Jan 30, 2015 at 7:58 PM, Heikki Linnakangas <
> hlinnakan...@vmware.com> wrote:
>
>> On 01/30/2015 04:48 AM, Venkata Balaji N wrote:
>>
>>> I perf
On Thu, Feb 5, 2015 at 3:11 AM, Josh Berkus wrote:
>
> On 02/04/2015 12:06 PM, Robert Haas wrote:
> > On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus wrote:
> >> Let me push "max_wal_size" and "min_wal_size" again as our new parameter
> >> names, because:
> >>
> >> * does what it says on the tin
> >>
On 2/4/15 6:16 PM, David Steele wrote:
On 2/4/15 3:06 PM, Robert Haas wrote:
Hmmm, I see your point. I spend a lot of time on AWS and in
container-world, where disk space is a lot more constrained. However,
it probably makes more sense to recommend non-default settings for that
environment, si
On 2/4/15 3:06 PM, Robert Haas wrote:
>> Hmmm, I see your point. I spend a lot of time on AWS and in
>> container-world, where disk space is a lot more constrained. However,
>> it probably makes more sense to recommend non-default settings for that
>> environment, since it requires non-default se
On 02/04/2015 12:06 PM, Robert Haas wrote:
> On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus wrote:
>> Let me push "max_wal_size" and "min_wal_size" again as our new parameter
>> names, because:
>>
>> * does what it says on the tin
>> * new-user friendly
>> * encourages people to express it in MB, not
On Wed, Feb 4, 2015 at 1:05 PM, Josh Berkus wrote:
> Let me push "max_wal_size" and "min_wal_size" again as our new parameter
> names, because:
>
> * does what it says on the tin
> * new-user friendly
> * encourages people to express it in MB, not segments
> * very different from the old name, so
On 02/04/2015 09:28 AM, Robert Haas wrote:
> On Tue, Feb 3, 2015 at 4:18 PM, Josh Berkus wrote:
That's different from our current checkpoint_segments setting. With
checkpoint_segments, the documented formula for calculating the disk usage
is (2 + checkpoint_completion_target) * chec
On Tue, Feb 3, 2015 at 4:18 PM, Josh Berkus wrote:
>>> That's different from our current checkpoint_segments setting. With
>>> checkpoint_segments, the documented formula for calculating the disk usage
>>> is (2 + checkpoint_completion_target) * checkpoint_segments * 16 MB. That's
>>> a lot less i
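The documented formula quoted above is easy to sanity-check with a quick calculation. This is an illustrative sketch, not PostgreSQL source; the function name is made up:

```python
# Documented upper bound on pg_xlog usage under the old scheme:
# (2 + checkpoint_completion_target) * checkpoint_segments * 16 MB
SEGMENT_SIZE_MB = 16  # default WAL segment size

def old_peak_wal_mb(checkpoint_segments, completion_target):
    return (2 + completion_target) * checkpoint_segments * SEGMENT_SIZE_MB

# Old defaults: checkpoint_segments = 3, checkpoint_completion_target = 0.5
print(old_peak_wal_mb(3, 0.5))    # 120.0 MB
print(old_peak_wal_mb(64, 0.5))   # 2560.0 MB
```

The point being made in the thread: with checkpoint_segments the cap on disk usage is an indirect multiple of the setting, whereas max_wal_size names the cap directly.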
On 02/03/2015 07:50 AM, Robert Haas wrote:
> On Tue, Feb 3, 2015 at 10:44 AM, Heikki Linnakangas
> wrote:
>> That's the whole point of this patch. "max_checkpoint_segments = 10", or
>> "max_checkpoint_segments = 160 MB", means that the system will begin a
>> checkpoint so that when the checkpoint
On 03/02/15 16:50, Robert Haas wrote:
On Tue, Feb 3, 2015 at 10:44 AM, Heikki Linnakangas
wrote:
That's the whole point of this patch. "max_checkpoint_segments = 10", or
"max_checkpoint_segments = 160 MB", means that the system will begin a
checkpoint so that when the checkpoint completes, and
On Tue, Feb 3, 2015 at 10:44 AM, Heikki Linnakangas
wrote:
>>> Works for me. However, note that "max_checkpoint_segments = 10" doesn't
>>> mean
>>> the same as current "checkpoint_segments = 10". With checkpoint_segments
>>> =
>>> 10 you end up with about 2x-3x as much WAL as with
>>> max_checkpoi
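The 2x-3x relationship mentioned here follows from deriving the checkpoint trigger point backwards from the cap. A rough sketch of that derivation (constant and function names are illustrative, not the actual PostgreSQL identifiers):

```python
SEGMENT_SIZE_MB = 16

def trigger_segments(max_wal_size_mb, completion_target):
    # Trigger a checkpoint early enough that, counting segments kept from the
    # previous cycle plus the current checkpoint spread over completion_target,
    # total WAL stays near the configured cap.
    segs = max_wal_size_mb / SEGMENT_SIZE_MB
    return max(1, int(segs / (2.0 + completion_target)))

# max_wal_size = 1 GB, checkpoint_completion_target = 0.5:
print(trigger_segments(1024, 0.5))   # 25 -> a checkpoint every ~400 MB of WAL
```

So "max_checkpoint_segments = 10" triggers checkpoints roughly 2.5x as often as "checkpoint_segments = 10" would, which is the difference Heikki is pointing out.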
On 02/03/2015 05:19 PM, Robert Haas wrote:
On Tue, Feb 3, 2015 at 7:31 AM, Heikki Linnakangas
wrote:
On 02/02/2015 03:36 PM, Robert Haas wrote:
Second, I *think* that these settings are symmetric and, if that's
right, then I suggest that they ought to be named symmetrically.
Basically, I think
On Tue, Feb 3, 2015 at 7:31 AM, Heikki Linnakangas
wrote:
> On 02/02/2015 03:36 PM, Robert Haas wrote:
>> Second, I *think* that these settings are symmetric and, if that's
>> right, then I suggest that they ought to be named symmetrically.
>> Basically, I think you've got min_checkpoint_segments
Heikki Linnakangas wrote:
> On 02/02/2015 04:21 PM, Andres Freund wrote:
>> On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
>>> Also, I'd like to propose that we set the default value of
>>> max_checkpoint_segments/checkpoint_wal_size to something at
>>> least an order of magnitude larger than th
On 02/02/2015 03:36 PM, Robert Haas wrote:
Second, I *think* that these settings are symmetric and, if that's
right, then I suggest that they ought to be named symmetrically.
Basically, I think you've got min_checkpoint_segments (the number of
recycled segments we keep around always) and max_chec
On 02/02/2015 04:21 PM, Andres Freund wrote:
Hi,
On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
Also, I'd like to propose that we set the default value of
max_checkpoint_segments/checkpoint_wal_size to something at least an
order of magnitude larger than the current default setting.
+1
I
Hi,
On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
> Also, I'd like to propose that we set the default value of
> max_checkpoint_segments/checkpoint_wal_size to something at least an
> order of magnitude larger than the current default setting.
+1
I think we need to increase checkpoint_timeout
On Fri, Jan 30, 2015 at 3:58 AM, Heikki Linnakangas
wrote:
>> During my tests, I did not observe the significance of
>> min_recycle_wal_size
>> parameter yet. Of course, I had sufficient disk space for pg_xlog.
>>
>> I would like to understand more about "min_recycle_wal_size" parameter. In
>> theo
On 01/30/2015 04:48 AM, Venkata Balaji N wrote:
I performed a series of tests for this patch and would like to share the
results. My comments are in-line.
Thanks for the testing!
*Test 1 :*
In this test, I see removed+recycled segments = 3 (except for the first 3
checkpoint cycles) and has bee
Hi,
I really like the idea of tuning checkpoint segments based on disk space
usage.
I performed a series of tests for this patch and would like to share the
results. My comments are in-line.
To start with, I applied this patch to the master successfully -
> But ... do I understand things correc
On Fri, Jan 2, 2015 at 3:27 PM, Heikki Linnakangas
wrote:
> On 01/01/2015 03:24 AM, Josh Berkus wrote:
>
>> Please remind me because I'm having trouble finding this in the
>> archives: how does wal_keep_segments interact with the new settings?
>>
>
> It's not very straightforward. First of all, m
On 01/05/2015 09:06 AM, Heikki Linnakangas wrote:
> I wasn't clear on my opinion here. I think I understood what Josh meant,
> but I don't think we should do it. Seems like unnecessary nannying of
> the DBA. Let's just mention in the manual that if you set
> wal_keep_segments higher than [insert fo
On 01/05/2015 12:06 PM, Andres Freund wrote:
On 2015-01-05 11:34:54 +0200, Heikki Linnakangas wrote:
On 01/04/2015 11:44 PM, Josh Berkus wrote:
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_k
On 2015-01-05 11:34:54 +0200, Heikki Linnakangas wrote:
> On 01/04/2015 11:44 PM, Josh Berkus wrote:
> >On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
> >>On 01/03/2015 12:28 AM, Josh Berkus wrote:
> >>>On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
> wal_keep_segments does not affect the
On 01/04/2015 11:44 PM, Josh Berkus wrote:
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation of CheckPointSegments.
If you set wal_keep_segments high e
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
> On 01/03/2015 12:28 AM, Josh Berkus wrote:
>> On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
>>> wal_keep_segments does not affect the calculation of CheckPointSegments.
>>> If you set wal_keep_segments high enough, checkpoint_wal_size will be
On 01/03/2015 12:28 AM, Josh Berkus wrote:
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
wal_keep_segments does not affect the calculation of CheckPointSegments.
If you set wal_keep_segments high enough, checkpoint_wal_size will be
exceeded. The other alternative would be to force a checkpoi
On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
> wal_keep_segments does not affect the calculation of CheckPointSegments.
> If you set wal_keep_segments high enough, checkpoint_wal_size will be
> exceeded. The other alternative would be to force a checkpoint earlier,
> i.e. lower CheckPointSegme
On 01/01/2015 03:24 AM, Josh Berkus wrote:
Please remind me because I'm having trouble finding this in the
archives: how does wal_keep_segments interact with the new settings?
It's not very straightforward. First of all, min_recycle_wal_size has a
different effect than wal_keep_segments. Raisi
Heikki,
Thanks for getting back to this! I really look forward to simplifying
WAL tuning for users.
>>> min_recycle_wal_size
>>> checkpoint_wal_size
>>>
>>
>>
>>> These settings are fairly intuitive for a DBA to tune. You begin by
>>> figuring out how much disk space you can afford to spend on
On 09/01/2013 10:37 AM, Amit Kapila wrote:
On Sat, Aug 24, 2013 at 2:38 AM, Heikki Linnakangas
wrote:
a.
In XLogFileInit(),
/*
! * XXX: What should we use as max_segno? We used to use XLOGfileslop when
! * that was a constant, but that was always a bit dubious: normally, at a
! *
(reviving an old thread)
On 08/24/2013 12:53 AM, Josh Berkus wrote:
On 08/23/2013 02:08 PM, Heikki Linnakangas wrote:
Here's a bigger patch, which does more. It is based on the ideas in the
post I started this thread with, with feedback incorporated from the
long discussion. With this patch, W
On Sat, Aug 24, 2013 at 12:08:30AM +0300, Heikki Linnakangas wrote:
> You can also set min_recycle_wal_size = checkpoint_wal_size, which
> gets you the same behavior as without the patch, except that it's
> more intuitive to set it in terms of "MB of WAL space required",
> instead of "# of segments
On Sat, Aug 24, 2013 at 2:38 AM, Heikki Linnakangas
wrote:
> On 03.07.2013 21:28, Peter Eisentraut wrote:
>>
>> On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
>>>
>>> Here's a patch implementing that. Docs not updated yet. I did not change
>>> the way checkpoint_segments triggers checkpoints - that'
On 08/23/2013 02:08 PM, Heikki Linnakangas wrote:
> Here's a bigger patch, which does more. It is based on the ideas in the
> post I started this thread with, with feedback incorporated from the
> long discussion. With this patch, WAL disk space usage is controlled by
> two GUCs:
>
> min_recycle_
On 03.07.2013 21:28, Peter Eisentraut wrote:
On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
Here's a patch implementing that. Docs not updated yet. I did not change
the way checkpoint_segments triggers checkpoints - that'll be a
separate patch. This only decouples the segment preallocation beh
On 07/03/2013 11:28 AM, Peter Eisentraut wrote:
> On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
> I don't understand what this patch, by itself, will accomplish in terms
> of the originally stated goals of making checkpoint_segments easier to
> tune, and controlling disk space used. To some degree,
On 6/6/13 4:09 PM, Heikki Linnakangas wrote:
> On 06.06.2013 20:24, Josh Berkus wrote:
>>> Yeah, something like that :-). I was thinking of letting the estimate
>>> decrease like a moving average, but react to any increases immediately.
>>> Same thing we do in bgwriter to track buffer allocations:
On 06/06/2013 03:21 PM, Joshua D. Drake wrote:
>
> Not to be unkind but the problems of the uniformed certainly are not
> the problems of the informed. Or perhaps they are certainly the
> problems of the informed :P.
I'm not convinced that's a particularly good argument not to improve
something. Su
On 06/07/2013 01:00 AM, Josh Berkus wrote:
> Daniel,
>
> So your suggestion is that if archiving is falling behind, we should
> introduce delays on COMMIT in order to slow down the rate of WAL writing?
Delaying commit wouldn't be enough; consider a huge COPY, which can
produce a lot of WAL at a hig
On 6/7/13 2:43 PM, Robert Haas wrote:
name. What I would like to see is a single number here in memory units that
replaces both checkpoint_segments and wal_keep_segments.
This isn't really making sense to me. I don't think we should assume
that someone who wants to keep WAL around for replica
Robert Haas wrote:
> Kevin Grittner wrote:
>> One such surprise was that the conversion ran faster, even on a
>> "largish" database of around 200GB, with 3 checkpoint_segments
>> than with larger settings.
>
> !
>
> I can't account for that finding, because my experience is that
> small checkpoi
On Fri, Jun 7, 2013 at 3:14 PM, Kevin Grittner wrote:
> Some findings were unsurprising, like that a direct connection
> between the servers using a cross-wired network patch cable was
> faster than plugging both machines into the same switch. But we
> tested all of our assumptions, and re-tested
Robert Haas wrote:
> (As to why smaller checkpoint_segments can help, here's my guess:
> if checkpoint_segments is relatively small, then when we recycle
> a segment we're likely to find its data already in cache. That's
> a lot better than reading it back in from disk just to overwrite
> the da
On Thu, Jun 6, 2013 at 10:43 PM, Greg Smith wrote:
> The general complaint the last time I suggested a change in this area, to
> make checkpoint_segments larger for the average user, was that some people
> had seen workloads where that was counterproductive. Pretty sure Kevin
> Grittner said he'd
On 6/6/13 4:41 AM, Heikki Linnakangas wrote:
I was thinking of letting the estimate
decrease like a moving average, but react to any increases immediately.
Same thing we do in bgwriter to track buffer allocations:
Combine what your submitted patch does and this idea, and you'll have
something
On 6/6/13 4:42 AM, Joshua D. Drake wrote:
On 6/6/2013 1:11 AM, Heikki Linnakangas wrote:
(I'm sure you know this, but:) If you perform a checkpoint as fast and
short as possible, the sudden burst of writes and fsyncs will
overwhelm the I/O subsystem, and slow down queries. That's what we saw
b
>> Given the behavior of xlog, I'd want to adjust the
>> algo so that peak usage on a 24-hour basis would affect current
>> preallocation. That is, if a site regularly has a peak from 2-3pm where
>> they're using 180 segments/cycle, then they should still be somewhat
>> higher at 2am than a datab
On 06.06.2013 20:24, Josh Berkus wrote:
Yeah, something like that :-). I was thinking of letting the estimate
decrease like a moving average, but react to any increases immediately.
Same thing we do in bgwriter to track buffer allocations:
Seems reasonable.
Here's a patch implementing that. D
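The estimator being described, decay slowly like a moving average but jump immediately on increases, the same shape bgwriter uses to smooth recent buffer allocations, can be sketched as follows (the smoothing factor is an illustrative choice, not the value the patch uses):

```python
SMOOTHING = 16  # illustrative smoothing factor

def update_estimate(estimate, observed):
    if observed >= estimate:
        return float(observed)    # react to increases immediately
    # otherwise decay gradually toward the new, lower observation
    return estimate + (observed - estimate) / SMOOTHING

# WAL usage per checkpoint cycle: a brief spike, then steady load
est = 0.0
for cycle_usage in [10, 180, 40, 40, 40]:
    est = update_estimate(est, cycle_usage)
print(round(est, 1))   # 155.4 -> the spike still dominates several cycles later
```

This gives exactly the behavior Josh asked for upthread: a 2-3pm peak keeps the preallocation elevated well into the quiet hours instead of shrinking it right back.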
On Thu, Jun 6, 2013 at 1:42 AM, Joshua D. Drake wrote:
>
>
> I may be confused but it is my understanding that bgwriter writes out the
> data from the shared buffer cache that is dirty based on an interval and a
> max pages written.
It primarily writes out based on how many buffers have recently
On Wed, Jun 5, 2013 at 8:20 PM, Joshua D. Drake wrote:
>
> On 06/05/2013 05:37 PM, Robert Haas wrote:
>
> - If it looks like we're going to exceed limit #3 before the
>> checkpoint completes, we start exerting back-pressure on writers by
>> making them wait every time they write WAL, probably in
>> Then I suggest we not use exactly that name. I feel quite sure we
>> would get complaints from people if something labeled as "max" was
>> exceeded -- especially if they set that to the actual size of a
>> filesystem dedicated to WAL files.
>
> You're probably right. Any suggestions for a bet
Daniel,
So your suggestion is that if archiving is falling behind, we should
introduce delays on COMMIT in order to slow down the rate of WAL writing?
Just so I'm clear.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
On 06.06.2013 15:31, Kevin Grittner wrote:
Heikki Linnakangas wrote:
On 05.06.2013 22:18, Kevin Grittner wrote:
Heikki Linnakangas wrote:
I was not thinking of making it a hard limit. It would be just
like checkpoint_segments from that point of view - if a
checkpoint takes a long time, max
Heikki Linnakangas wrote:
> On 05.06.2013 22:18, Kevin Grittner wrote:
>> Heikki Linnakangas wrote:
>>
>>> I was not thinking of making it a hard limit. It would be just
>>> like checkpoint_segments from that point of view - if a
>>> checkpoint takes a long time, max_wal_size might still be
>>> e
On 05.06.2013 22:24, Fujii Masao wrote:
On Thu, Jun 6, 2013 at 3:35 AM, Heikki Linnakangas
wrote:
The checkpoint spreading code already tracks if the checkpoint is "on
schedule", and it takes into account both checkpoint_timeout and
checkpoint_segments. Ie. if you consume segments faster than
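The "on schedule" check being referred to can be sketched like this: progress is measured against both elapsed time and WAL consumed, and whichever dimension is advancing faster drives the pacing. Names and the structure are illustrative, not the actual checkpointer code:

```python
def is_on_schedule(done_fraction, elapsed_s, timeout_s, segs_used, target_segs,
                   completion_target=0.5):
    # Fraction of the checkpoint window already used, by time and by WAL volume
    time_progress = elapsed_s / (timeout_s * completion_target)
    seg_progress = segs_used / (target_segs * completion_target)
    # Behind schedule if either resource has outrun the work completed so far
    return done_fraction >= max(time_progress, seg_progress)

# Halfway through flushing buffers, 2 min into a 5-min timeout, but WAL is
# being consumed fast: 20 of 30 target segments already used -> behind schedule
print(is_on_schedule(0.5, 120, 300, 20, 30))   # False
```

When the volume term dominates, as in the example, the checkpointer speeds up its writes even though it is comfortably ahead of the timeout clock.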
On 05.06.2013 22:18, Kevin Grittner wrote:
Heikki Linnakangas wrote:
I was not thinking of making it a hard limit. It would be just
like checkpoint_segments from that point of view - if a
checkpoint takes a long time, max_wal_size might still be
exceeded.
Then I suggest we not use exactly th
On 06.06.2013 11:42, Joshua D. Drake wrote:
On 6/6/2013 1:11 AM, Heikki Linnakangas wrote:
Yes checkpoint_segments is awkward. We shouldn't have to set it at all.
It should be gone.
The point of having checkpoint_segments or max_wal_size is to put a
limit (albeit a soft one) on the amount of d
On 6/6/2013 1:11 AM, Heikki Linnakangas wrote:
(I'm sure you know this, but:) If you perform a checkpoint as fast and
short as possible, the sudden burst of writes and fsyncs will
overwhelm the I/O subsystem, and slow down queries. That's what we saw
before spread checkpoints: when a checkpo
On 05.06.2013 23:16, Josh Berkus wrote:
For limiting the time required to recover after crash,
checkpoint_segments is awkward because it's difficult to calculate how
long recovery will take, given checkpoint_segments=X. A bulk load can
use up segments really fast, and recovery will be fast, while
On 06.06.2013 06:20, Joshua D. Drake wrote:
3. The spread checkpoints have always confused me. If anything we want a
checkpoint to be fast and short because:
(I'm sure you know this, but:) If you perform a checkpoint as fast and
short as possible, the sudden burst of writes and fsyncs will ove
On 6/5/2013 11:31 PM, Peter Geoghegan wrote:
On Wed, Jun 5, 2013 at 11:28 PM, Joshua D. Drake wrote:
I have zero doubt that in your case it is true and desirable. I just don't
know that it is a positive solution to the problem as a whole. Your case is
rather limited to your environment, which
On 6/5/2013 11:25 PM, Harold Giménez wrote:
Instead of "running out of disk space PANIC" we should just write
to an emergency location within PGDATA
This merely buys you some time, but with aggressive and sustained
write throughput you are left on the same spot. Practically speaking