On Thu, Feb 4, 2016 at 4:42 PM, Andres Freund wrote:
> On 2016-02-03 09:57:00 -0500, Robert Haas wrote:
>> On Mon, Feb 1, 2016 at 7:43 PM, Andres Freund wrote:
>> > I wonder if this essentially points at checkpoint_timeout being wrongly
>> > defined: Currently it means we'll try to finish a checkpoint
On 2016-02-03 09:57:00 -0500, Robert Haas wrote:
> On Mon, Feb 1, 2016 at 7:43 PM, Andres Freund wrote:
> > I wonder if this essentially points at checkpoint_timeout being wrongly
> > defined: Currently it means we'll try to finish a checkpoint
> > (1-checkpoint_completion_target) * timeout before
On Mon, Feb 1, 2016 at 7:43 PM, Andres Freund wrote:
> Right now it takes checkpoint_timeout till we start a checkpoint, and
> checkpoint_timeout + checkpoint_timeout * checkpoint_completion_target
> till we complete the first checkpoint after shutdown/forced checkpoints.
>
> That means a) that such
Hi,
Right now it takes checkpoint_timeout till we start a checkpoint, and
checkpoint_timeout + checkpoint_timeout * checkpoint_completion_target
till we complete the first checkpoint after shutdown/forced checkpoints.
That means a) that such a checkpoint will often be bigger/more heavyweight
than t
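To make that arithmetic concrete, here is a small standalone sketch (not
PostgreSQL code), assuming the defaults of that era, checkpoint_timeout =
5min and checkpoint_completion_target = 0.5:

#include <stdio.h>

int main(void)
{
    double checkpoint_timeout = 300.0;   /* seconds; assumed 5min default */
    double completion_target  = 0.5;     /* assumed pre-v14 default */

    /* The first timed checkpoint after a shutdown/forced checkpoint only
     * starts once a full checkpoint_timeout has elapsed... */
    double start_s = checkpoint_timeout;

    /* ...and its writes are then spread over
     * checkpoint_timeout * checkpoint_completion_target. */
    double finish_s = checkpoint_timeout
                    + checkpoint_timeout * completion_target;

    printf("first checkpoint starts at %.0f s and completes around %.0f s\n",
           start_s, finish_s);
    return 0;
}

With those assumed settings the first checkpoint after a forced one does not
complete until roughly 450 seconds (7.5 minutes) in, which is the delay
being described above.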
On 2015-06-10 11:20:19 +1200, Thomas Munro wrote:
> I was wondering about this in the context of the recent multixact
> work, since such configurations could leave you with different SLRU
> files on disk which in some versions might change the behaviour in
> interesting ways.
Note that trigger a r
On Wed, Jun 10, 2015 at 9:33 AM, Bruce Momjian wrote:
> Ah, so even though standbys don't have to write WAL, they are fsyncing
> shared buffers. Where is the restart point recorded, in pg_controldata?
> c
Yep. Latest checkpoint's REDO location, or
ControlFile->checkPointCopy.redo. During recovery
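For reference, a simplified sketch of the structure being named here; the
field names follow src/include/catalog/pg_control.h, but the types and
layout are stand-ins for illustration only and the LSN below is made up:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;            /* stand-in for a WAL location */

typedef struct CheckPoint
{
    XLogRecPtr  redo;                   /* REDO start point of the checkpoint
                                         * (on a standby, of the latest
                                         * restartpoint) */
    /* ... other fields elided ... */
} CheckPoint;

typedef struct ControlFileData
{
    CheckPoint  checkPointCopy;         /* copy of the last checkpoint record;
                                         * pg_controldata reports its redo
                                         * field as "Latest checkpoint's REDO
                                         * location" */
    /* ... other fields elided ... */
} ControlFileData;

int main(void)
{
    ControlFileData cf = { { 0 } };

    cf.checkPointCopy.redo = 0x16B2DD8; /* made-up LSN */
    printf("Latest checkpoint's REDO location: %X/%X\n",
           (unsigned) (cf.checkPointCopy.redo >> 32),
           (unsigned) cf.checkPointCopy.redo);
    return 0;
}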
On Tue, Jun 9, 2015 at 05:20:23PM -0700, Jeff Janes wrote:
> On Tue, Jun 9, 2015 at 4:20 PM, Thomas Munro wrote:
>
> Hi
>
> Why do standby servers not simply treat every checkpoint as a
> restartpoint? As I understand it, setting checkpoint_timeout and
> checkpoint_segments higher
On Tue, Jun 9, 2015 at 4:20 PM, Thomas Munro wrote:
> Hi
>
> Why do standby servers not simply treat every checkpoint as a
> restartpoint? As I understand it, setting checkpoint_timeout and
> checkpoint_segments higher on a standby server effectively instructs
> standby servers to skip some checkpoints
Hi
Why do standby servers not simply treat every checkpoint as a
restartpoint? As I understand it, setting checkpoint_timeout and
checkpoint_segments higher on a standby server effectively instructs
standby servers to skip some checkpoints. Even with the same settings
on both servers, the server
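As a rough illustration of the behaviour being asked about (a sketch of the
idea only, not the real code in xlog.c/checkpointer.c, and the WAL positions
are made up): a standby can only establish a restartpoint at a checkpoint
record it has replayed, and it only does so when its own
checkpoint_timeout/checkpoint_segments triggers fire, so with higher
settings some replayed checkpoints never become restartpoints.

#include <stdbool.h>
#include <stdio.h>

typedef struct ReplayState
{
    long last_restartpoint;      /* checkpoint record the last restartpoint used */
    long last_checkpoint_record; /* latest checkpoint record seen during replay  */
} ReplayState;

/* Conceptually called when the standby's own timeout/segments trigger fires. */
static bool maybe_restartpoint(ReplayState *st)
{
    if (st->last_checkpoint_record == st->last_restartpoint)
        return false;            /* nothing new to anchor a restartpoint on */
    st->last_restartpoint = st->last_checkpoint_record;
    return true;
}

int main(void)
{
    ReplayState st = { 0, 0 };

    /* Replay sees checkpoint records at positions 100, 200 and 300, but the
     * standby's own trigger fires only once, after 300 has been replayed:
     * the checkpoints at 100 and 200 never become restartpoints. */
    st.last_checkpoint_record = 100;
    st.last_checkpoint_record = 200;
    st.last_checkpoint_record = 300;

    printf("restartpoint created: %s (anchored at %ld)\n",
           maybe_restartpoint(&st) ? "yes" : "no", st.last_restartpoint);
    return 0;
}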
On Thu, Oct 6, 2011 at 7:18 PM, Robert Haas wrote:
> As of 9.1, we already have something very much like this, in the
> opposite direction.
Yes Robert, I wrote it.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Simon Riggs wrote:
> I foresee a function that tells you the delay based on a protocol
> message of 'k' for keepalive.
If the delay you mention is basically a "ping" time or something
similar, that would answer the need I've been on about. We need to
know, based on access to the replica, that
On Thu, Oct 6, 2011 at 2:26 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Oct 6, 2011 at 1:56 PM, Tom Lane wrote:
>>> I'm inclined to think that the way to deal with that is not to force out
>>> useless WAL data, but to add some sort of explicit "I'm alive" heartbeat
>>> signal to the walsender/walreceiver protocol.
Robert Haas writes:
> On Thu, Oct 6, 2011 at 1:56 PM, Tom Lane wrote:
>> I'm inclined to think that the way to deal with that is not to force out
>> useless WAL data, but to add some sort of explicit "I'm alive" heartbeat
>> signal to the walsender/walreceiver protocol. The hard part of that is
On Thu, Oct 6, 2011 at 1:56 PM, Tom Lane wrote:
> Simon Riggs writes:
>> Do we want this backpatched? If so, suggest just 9.1 and 9.0?
>
> -1 for backpatching; it's more an improvement than a bug fix.
>
> In any case, I think we still need to respond to the point Kevin made
> about how to tell an idle master from broken replication.
On Thu, Oct 6, 2011 at 6:56 PM, Tom Lane wrote:
> Simon Riggs writes:
>> Do we want this backpatched? If so, suggest just 9.1 and 9.0?
>
> -1 for backpatching; it's more an improvement than a bug fix.
OK, works for me.
> In any case, I think we still need to respond to the point Kevin made
> about how to tell an idle master from broken replication.
On 06.10.2011 20:58, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Oct 6, 2011 at 12:44 PM, Tom Lane wrote:
>>> I think the point is that a totally idle database should not continue to
>>> emit WAL, not even at a slow rate. There are also power-consumption
>>> objections to allowing the checkpoint process to fire up to no purpose.
Robert Haas writes:
> On Thu, Oct 6, 2011 at 12:44 PM, Tom Lane wrote:
>> I think the point is that a totally idle database should not continue to
>> emit WAL, not even at a slow rate. There are also power-consumption
>> objections to allowing the checkpoint process to fire up to no purpose.
>
Simon Riggs writes:
> Do we want this backpatched? If so, suggest just 9.1 and 9.0?
-1 for backpatching; it's more an improvement than a bug fix.
In any case, I think we still need to respond to the point Kevin made
about how to tell an idle master from broken replication. Right now,
you will g
On Thu, Oct 6, 2011 at 5:06 PM, Tom Lane wrote:
> Simon Riggs writes:
>> The current idea is that if there has been no activity then we skip
>> checkpoint. But all it takes is a single WAL record and off we go with
>> another checkpoint. If there hasn't been much WAL activity, there is
>> not much
Robert Haas wrote:
> Tom Lane wrote:
>> I think the point is that a totally idle database should not
>> continue to emit WAL, not even at a slow rate. There are also
>> power-consumption objections to allowing the checkpoint process
>> to fire up to no purpose.
>
> Hmm, OK. I still think it'
On Thu, Oct 6, 2011 at 12:44 PM, Tom Lane wrote:
> Robert Haas writes:
>> I'm not entirely sure I understand the rationale, though. I mean, if
>> very little has happened since the last checkpoint, then the
>> checkpoint will be very cheap. In the totally degenerate case Fujii
>> Masao is reporting
Robert Haas writes:
> I'm not entirely sure I understand the rationale, though. I mean, if
> very little has happened since the last checkpoint, then the
> checkpoint will be very cheap. In the totally degenerate case Fujii
> Masao is reporting, where absolutely nothing has happened, it should
>
On Thu, Oct 6, 2011 at 12:06 PM, Tom Lane wrote:
> Simon Riggs writes:
>> The current idea is that if there has been no activity then we skip
>> checkpoint. But all it takes is a single WAL record and off we go with
>> another checkpoint. If there hasn't been much WAL activity, there is
>> not much
Simon Riggs writes:
> The current idea is that if there has been no activity then we skip
> checkpoint. But all it takes is a single WAL record and off we go with
> another checkpoint. If there hasn't been much WAL activity, there is
> not much point in having another checkpoint record since there
On Wed, Oct 5, 2011 at 6:19 AM, Fujii Masao wrote:
> While the system is idle, we skip duplicate checkpoints for several
> reasons. But when wal_level is set to hot_standby, I found that
> checkpoints are wrongly duplicated even while the system is idle.
> The cause is that an XLOG_RUNNING_XACTS WAL record
On Wed, Oct 5, 2011 at 1:19 AM, Fujii Masao wrote:
> While the system is idle, we skip duplicate checkpoints for several
> reasons. But when wal_level is set to hot_standby, I found that
> checkpoints are wrongly duplicated even while the system is idle.
> The cause is that an XLOG_RUNNING_XACTS WAL record
Hi,
While the system is idle, we skip duplicate checkpoints for several
reasons. But when wal_level is set to hot_standby, I found that
checkpoints are wrongly duplicated even while the system is idle.
The cause is that an XLOG_RUNNING_XACTS WAL record always
follows the CHECKPOINT one when wal_level is set to hot_standby.
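A minimal sketch of the idleness test being defeated here (simplified; the
real check lives in CreateCheckPoint() and compares actual WAL insert
positions, and the positions below are made up):

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long XLogRecPtr;   /* stand-in for a WAL position */

/* Skip the checkpoint if the last checkpoint record is still the newest
 * thing in WAL, i.e. nothing has been inserted since it was written. */
static bool can_skip_checkpoint(XLogRecPtr insert_pos,
                                XLogRecPtr last_checkpoint_end)
{
    return insert_pos == last_checkpoint_end;
}

int main(void)
{
    XLogRecPtr ckpt_end = 1100;          /* end of the last checkpoint record */

    /* Below hot_standby, an idle system writes nothing after the checkpoint,
     * so the next timed checkpoint can be skipped. */
    printf("idle, no trailing record:     skip = %d\n",
           can_skip_checkpoint(ckpt_end, ckpt_end));

    /* With wal_level = hot_standby, an XLOG_RUNNING_XACTS record is written
     * right after the checkpoint, so the test never holds and checkpoints
     * keep being repeated even though the system is idle. */
    XLogRecPtr after_running_xacts = ckpt_end + 50;
    printf("idle, trailing RUNNING_XACTS: skip = %d\n",
           can_skip_checkpoint(after_running_xacts, ckpt_end));

    return 0;
}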
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> When we checkpoint we write out all dirty buffers. But ISTM we don't really
>> need to write out buffers which are dirty but which have an LSN older than the
>> previous checkpoint. Those represent buffers which
Gregory Stark <[EMAIL PROTECTED]> writes:
> When we checkpoint we write out all dirty buffers. But ISTM we don't really
> need to write out buffers which are dirty but which have an LSN older than the
> previous checkpoint. Those represent buffers which were dirtied by a
> non-wal-logged modification, ie, hint bit setting.
When we checkpoint we write out all dirty buffers. But ISTM we don't really
need to write out buffers which are dirty but which have an LSN older than the
previous checkpoint. Those represent buffers which were dirtied by a
non-wal-logged modification, ie, hint bit setting. The other non-wal-logged
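A hedged sketch of the filter being proposed (not BufferSync() itself; names
and types here are stand-ins, and the positions are made up): a dirty buffer
whose page LSN predates the previous checkpoint's REDO pointer can only have
been dirtied by non-WAL-logged changes such as hint bits since then, so
under this proposal the checkpoint would not write it.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long XLogRecPtr;

typedef struct BufferSketch
{
    bool        dirty;
    XLogRecPtr  page_lsn;   /* LSN of the last WAL-logged change to the page */
} BufferSketch;

/* Would a checkpoint have to write this buffer under the proposal? */
static bool checkpoint_must_write(const BufferSketch *buf,
                                  XLogRecPtr prev_checkpoint_redo)
{
    if (!buf->dirty)
        return false;                   /* clean buffers are never written */
    /* Dirty, but the last WAL-logged change predates the previous
     * checkpoint: only hint-bit-style changes since then, so skip it. */
    return buf->page_lsn >= prev_checkpoint_redo;
}

int main(void)
{
    XLogRecPtr prev_redo = 5000;

    BufferSketch hinted  = { true, 1200 };     /* only hint bits set recently  */
    BufferSketch updated = { true, 7300 };     /* WAL-logged update since then */

    printf("hint-bit-only buffer written:     %d\n",
           checkpoint_must_write(&hinted, prev_redo));
    printf("WAL-logged-update buffer written: %d\n",
           checkpoint_must_write(&updated, prev_redo));
    return 0;
}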
> "BM" == Bruce Momjian <[EMAIL PROTECTED]> writes:
BM> Vivek, you reported recently that increasing sort_mem and
BM> checkpoint_segments increased performance. Can you run a test to see
BM> how much of that improvement was just because of increasing
BM> checkpoint_segments?
I was thinking j
Vivek, you reported recently that increasing sort_mem and
checkpoint_segments increased performance. Can you run a test to see
how much of that improvement was just because of increasing
checkpoint_segments?
--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED]
Hello,
I have written code to support multiple buffer pools in postgres 7.3.2.
Now I am looking at changing the sizes of these buffer pools, but first I
need to write all pages to disk.
I also need to incorporate this code into the backend instead of it being
an SQL statement as it is now. I noticed