On 21.05.2013 00:00, Simon Riggs wrote:
When we set the new timeline we should be updating all values that
might be used elsewhere. If we do that, then no matter when or how we
run GetXLogReplayRecPtr, it can't ever get it wrong in any backend.
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
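For illustration, the pattern Simon is describing might look like this inside
the replay loop. This is only a sketch: the XLogCtl field and spinlock names
follow current xlog.c conventions and are my assumption, not text from the
patch.

    /*
     * Sketch, not the actual patch: advance the last-replayed LSN and its
     * timeline together, under the same spinlock that GetXLogReplayRecPtr()
     * takes when reading them, so no backend can ever observe a torn
     * (LSN, TLI) pair.  EndRecPtr is the end of the record just replayed,
     * ThisTimeLineID the timeline it was replayed on.
     */
    SpinLockAcquire(&XLogCtl->info_lck);
    XLogCtl->lastReplayedEndRecPtr = EndRecPtr;
    XLogCtl->lastReplayedTLI = ThisTimeLineID;  /* new timeline visible here */
    SpinLockRelease(&XLogCtl->info_lck);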
On Monday, May 20, 2013 6:54 PM Robert Haas wrote:
On Thu, May 16, 2013 at 10:18 AM, Amit Kapila amit.kap...@huawei.com wrote:
Further Performance Data:
The data below is the average of 3 runs of 20 minutes each
Scale Factor - 1200
Shared Buffers - 7G
These results are good but I don't get
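For context, a run matching the parameters quoted above might be driven like
this; the client/thread counts and exact flags are my assumptions, not taken
from the post:

    $ pg_ctl start -D $PGDATA -o "-c shared_buffers=7GB"
    $ pgbench -i -s 1200 pgbench              # initialize at scale factor 1200
    $ pgbench -c 64 -j 64 -T 1200 pgbench     # one 20-minute (1200 s) run
      # repeat three times and average the reported tps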
On 21 May 2013 07:46, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 21.05.2013 00:00, Simon Riggs wrote:
When we set the new timeline we should be updating all values that
might be used elsewhere. If we do that, then no matter when or how we
run GetXLogReplayRecPtr, it can't ever get
On Tuesday, May 21, 2013 12:36 PM Amit Kapila wrote:
On Monday, May 20, 2013 6:54 PM Robert Haas wrote:
On Thu, May 16, 2013 at 10:18 AM, Amit Kapila amit.kap...@huawei.com wrote:
Further Performance Data:
The data below is the average of 3 runs of 20 minutes each
Scale Factor - 1200
Presumably we would want to repeat all of the ordinary commands in the
file, but not any of the backslash set commands that precede any ordinary
commands. But what if backslash set commands are sprinkled between
ordinary commands?
See, this is why I had no intention of retrying. Since
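To make the question concrete, a custom pgbench script of that era could
legitimately look like the following (a hypothetical file using the old
\set/\setrandom syntax), with a backslash command sitting between two
ordinary SQL commands:

    \set naccounts 100000 * :scale
    \setrandom aid 1 :naccounts
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    \setrandom bid 1 :scale
    UPDATE pgbench_branches SET bbalance = bbalance + 1 WHERE bid = :bid;
    END;

On a retry it is not obvious whether the \setrandom between the two UPDATEs
should be re-executed (yielding a new :bid) or replayed with its old value.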
On Tue, May 21, 2013 at 4:44 AM, Simon Riggs si...@2ndquadrant.com wrote:
On 20 May 2013 20:06, Heikki Linnakangas hlinnakan...@vmware.com wrote:
It would be possible to redesign this with a special new reason, or we
could just use time as the reason, or we could just leave it.
Do nothing is
On 21 May 2013 15:29, Fujii Masao masao.fu...@gmail.com wrote:
Or, what about using CHECKPOINT_FORCE and just printing "force"?
Currently that checkpoint always starts because of the existence of the
end-of-recovery record, but I think we should ensure that the checkpoint
always starts by using
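A minimal sketch of that suggestion, assuming the existing RequestCheckpoint()
API and flag names; whether the promotion path passes exactly this flag
combination is not established by the snippet above:

    /*
     * Request the post-promotion checkpoint with CHECKPOINT_FORCE so that
     * CreateCheckPoint() cannot skip it even if no WAL has been written
     * since the previous checkpoint; the "checkpoint starting:" log line
     * then lists "force" among its reasons.
     */
    RequestCheckpoint(CHECKPOINT_FORCE | CHECKPOINT_WAIT);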
We are seeing these errors on a regular basis on the testing box now. We
have even changed the backup script to
shut down the hot standby, take an LVM snapshot, restart the hot standby, and
rsync the LVM snapshot. It still happens.
We have never seen this before we introduced the hot standby. So we
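For reference, the procedure described reads roughly like the following
script; the device names, paths, and snapshot size are placeholders of mine,
not taken from the report:

    pg_ctl stop -D /var/lib/pgsql/data -m fast        # stop the hot standby
    lvcreate -s -L 20G -n pgsnap /dev/vg0/pgdata      # LVM snapshot of the data volume
    pg_ctl start -D /var/lib/pgsql/data               # restart the standby
    mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
    rsync -a /mnt/pgsnap/ backuphost:/backups/pgdata/
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap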
Hi,
We cannot run parallel pg_dump on the standby server because
pg_export_snapshot() always fails on the standby. Is this an oversight
in parallel pg_dump or pg_export_snapshot()?
pg_export_snapshot() fails on the standby because it always assigns a
new XID, which is not allowed in the
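The failure is easy to reproduce on a hot standby (the exact error wording
may vary by version):

    postgres=# SELECT pg_export_snapshot();
    ERROR:  cannot assign TransactionIds during recovery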
On 21 May 2013 09:26, Simon Riggs si...@2ndquadrant.com wrote:
I'm OK with that principle...
Well, after fighting some more with that, I've gone with the, er,
principle of slightly less ugliness.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7
I worked up a small patch to support a Terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.
Which highlights that since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
belief we'll want to go that high, or
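The 2047GB figure falls straight out of the unit: per the post, these GUCs
are stored as an int count of kB, so the ceiling is INT_MAX kB (assuming the
usual 32-bit int). A throwaway check, standalone C rather than code from
guc.c:

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
        long    max_kb = INT_MAX;                     /* 2147483647 kB */

        printf("%ld GB\n", max_kb / (1024 * 1024));   /* prints 2047 */
        return 0;
    }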
On 22/05/13 09:13, Simon Riggs wrote:
I worked up a small patch to support a Terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.
Which highlights that since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
Re: Andrew Dunstan 2013-05-17 51964770.6070...@dunslane.net
I have reproduced this. It happens with both the distro perl and a
home-built perl 5.14. AFAICT this is a Perl bug. Any reference at
all to ERRSV at the point this occurs causes a core dump, even just
assigning it to a local SV *
We've had a number of discussions about the evils of SnapshotNow. As
far as I can tell, nobody likes it and everybody wants it gone, but
there is concern about the performance impact. I decided to do some
testing to measure the impact. I was pleasantly surprised by the
results.
The attached