On 8 January 2014 07:43, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 01/08/2014 08:56 AM, Simon Riggs wrote:
Current freespace code gives a new block insert target (NBT) from
anywhere in the table. That isn't very useful with bigger tables and it
would be useful to be able to specify
VACUUM cleans up blocks, which is nice because it happens offline in a
lazy manner.
We also make SELECT clean up blocks as it goes. That is useful in OLTP
workloads, but it means that large SQL queries and pg_dump effectively
do much the same work as VACUUM, generating huge amounts of I/O and
WAL
Dean,
Short version
-------------
Looks amazing overall. Very clever to zip up the s.b. quals, let the
rest of the rewriter and planner do their work normally, then unpack them
into subqueries inserted in the planner once inheritance appendrels are
expanded, etc.
My main concern is that the
On 11/13/2013 03:09 PM, Rajeev rastogi wrote:
This patch implements the following TODO item:
Add a new eager synchronous mode that starts out synchronous but reverts to
asynchronous after a failure timeout period
This would require some type of command to be executed to alert administrators
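The proposed mode amounts to a small state machine around the commit wait. A minimal standalone sketch of that idea (all names here are hypothetical illustrations, not PostgreSQL source):

```c
#include <stdbool.h>

/* Sketch of the proposed "eager synchronous" mode: commits wait for the
 * standby, but once a wait exceeds the failure timeout the server degrades
 * to asynchronous operation and raises an alert for the administrator. */

typedef enum { MODE_SYNC, MODE_ASYNC_DEGRADED } repl_mode;

typedef struct {
    repl_mode mode;
    int  failure_timeout_ms;  /* how long to wait before degrading */
    bool alert_raised;        /* administrators must be notified */
} repl_state;

/* Called while a commit waits for the synchronous standby's ack.
 * Returns true when the commit may complete. */
bool maybe_degrade(repl_state *st, int waited_ms, bool standby_acked)
{
    if (standby_acked)
        return true;                    /* normal synchronous commit */
    if (st->mode == MODE_ASYNC_DEGRADED)
        return true;                    /* already degraded: don't wait */
    if (waited_ms >= st->failure_timeout_ms) {
        st->mode = MODE_ASYNC_DEGRADED; /* revert to async after timeout */
        st->alert_raised = true;        /* hook for the alerting command */
        return true;
    }
    return false;                       /* keep waiting */
}
```

The open design question in the thread is exactly the `alert_raised` hook: what command gets run, and who guarantees anyone sees it.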
Re: Tom Lane 2014-01-07 14979.1389112...@sss.pgh.pa.us
But if extra_float_digits > 0 is set, I'd expect not only the float4
output to be affected by it, but also casts to other datatypes,
This proposal scares me. extra_float_digits is strictly a matter of
I/O representation, it does not
On 2014-01-08 11:07:48 +0200, Heikki Linnakangas wrote:
I'm going to say right off the bat that I think the whole notion to
automatically disable synchronous replication when the standby goes down is
completely bonkers. If you don't need the strong guarantee that your
transaction is safe in at
On Tue, Jan 7, 2014 at 10:42 PM, Matheus de Oliveira
matioli.math...@gmail.com wrote:
How did you set up the standby? Did you initialize it from an offline
backup of the master's data directory, perhaps? The log shows that the
startup took the crash recovery path first, then started archive
On 01/08/2014 02:32 PM, Matheus de Oliveira wrote:
On Tue, Jan 7, 2014 at 10:42 PM, Matheus de Oliveira
matioli.math...@gmail.com wrote:
How did you set up the standby? Did you initialize it from an offline
backup of the master's data directory, perhaps? The log shows that the
startup took
On 2014-01-08 14:37:34 +0200, Heikki Linnakangas wrote:
That seems to be a very common mistake to make. I wish we could do something
about it. Do you think it would've helped in your case if there was a big
fat warning in the beginning of backup_label, along the lines of: # DO NOT
REMOVE THIS
On 2014-01-07 22:42:59 -0200, Matheus de Oliveira wrote:
@andres, if it is really removing backup_label it could also cause that
other issue we saw on Monday, right? (yes I did run the same script).
It might be in your case since that's an easy way to generate that
situation, but there have
On 01/08/2014 07:29 AM, Greg Stark wrote:
On Tue, Jan 7, 2014 at 11:36 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Hmm. The xlogdump indeed shows that the order of 'clean' and 'visible' is
incorrect, but I don't immediately see how that could cause the PANIC. Why
is the page
The docs say:
At most one of recovery_target_time, recovery_target_name or
recovery_target_xid can be specified
However, the code actually allows them all to be specified at the same time:
else if (strcmp(item->name, "recovery_target_name") == 0)
{
Heikki Linnakangas hlinnakan...@vmware.com writes:
The docs say:
At most one of recovery_target_time, recovery_target_name or
recovery_target_xid can be specified
However, the code actually allows them all to be specified at the same time:
else if (strcmp(item->name, "recovery_target_name")
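The rule the docs state could be enforced by counting how many recovery-target settings the parsed file contains and rejecting more than one. A hypothetical standalone sketch of that check (not the actual xlog.c code):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical parameter representation for a parsed recovery.conf. */
typedef struct {
    const char *name;
    const char *value;
} recovery_param;

/* Returns how many recovery_target_* settings appear among the
 * parameters; a caller would raise an error when the result is > 1,
 * matching the documented "at most one" rule. */
int count_recovery_targets(const recovery_param *params, size_t n)
{
    static const char *const targets[] = {
        "recovery_target_time",
        "recovery_target_name",
        "recovery_target_xid",
    };
    int found = 0;

    for (size_t i = 0; i < n; i++)
        for (size_t t = 0; t < sizeof(targets) / sizeof(targets[0]); t++)
            if (strcmp(params[i].name, targets[t]) == 0)
                found++;
    return found;
}
```

The alternative the code currently implements is last-one-wins, which is also defensible; the bug is only that docs and code disagree.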
On 2014-01-07 17:54:21 +0100, Andres Freund wrote:
On 2013-12-12 16:49:33 +0100, Andres Freund wrote:
On 2013-12-12 10:01:21 -0500, Robert Haas wrote:
On Thu, Dec 12, 2013 at 7:04 AM, Andres Freund and...@2ndquadrant.com
wrote:
As far as I have been thinking of, this would be another
Heikki Linnakangas hlinnakan...@vmware.com writes:
On 01/08/2014 08:56 AM, Simon Riggs wrote:
* IN-MEMORY
A large table may only have some of its blocks in memory. It would be
useful to force a NBT to be a block already in shared_buffers IFF a
table is above a certain size (use same threshold
Hello,
could you tell me where I can find the latest version of the MERGE PATCH
file? I need to use it on PostgreSQL 9.3.
I couldn't find it anywhere in git.
Regards,
Nikolay
On 8 January 2014 09:07, Heikki Linnakangas hlinnakan...@vmware.com wrote:
I'm going to say right off the bat that I think the whole notion to
automatically disable synchronous replication when the standby goes down is
completely bonkers.
Agreed
We had this discussion across 3 months and we
On 4 January 2014 13:01, Gabriele Bartolini
gabriele.bartol...@2ndquadrant.it wrote:
I'd suggest making the view on top of an SRF like pg_stat_replication
and pg_stat_activity (for example), instead of a whole lot of separate
function calls like the older stats views.
Ok, good idea.
Not
On 8 January 2014 15:38, Heikki Linnakangas hlinnakan...@vmware.com wrote:
The docs say:
At most one of recovery_target_time, recovery_target_name or
recovery_target_xid can be specified
However, the code actually allows them all to be specified at the same time:
else if
On Wed, Jan 8, 2014 at 6:42 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 4 January 2014 13:01, Gabriele Bartolini
gabriele.bartol...@2ndquadrant.it wrote:
I'd suggest making the view on top of an SRF like pg_stat_replication
and pg_stat_activity (for example), instead of a whole lot of
On 25 December 2013 12:01, Andreas Joseph Krogh andr...@officenet.no wrote:
Ref:
http://wiki.postgresql.org/wiki/BDR_Project
Is implementing main BDR features into core Postgres a probable objective for
version 9.4?
I've not updated hackers recently on this point, so thanks for asking
the
The fact that a final function may be called
multiple times does not appear to be mentioned
in the docs, and a reasonable reading of the
docs gives the alternate impression. For
people writing final functions in C, there
should be some warning about changing the
transition value, such as exists
Mark Dilger markdil...@yahoo.com writes:
The fact that a final function may be called
multiple times does not appear to be mentioned
in the docs,
It's in xaggr.sgml, near the discussion of AggCheckCallContext
(the end of section 35.10, in current HEAD's section numbering).
In general, people
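The hazard can be shown outside the backend: if a final function mutates the transition state, a second invocation over the same state (as happens for window aggregates) returns a different answer. A hypothetical running-average in plain C, not backend code:

```c
/* Transition state for a running average. PostgreSQL may apply the
 * final function repeatedly to the same transition value, so the final
 * function must treat the state as read-only. */
typedef struct {
    double sum;
    long   count;
} avg_state;

/* Correct: reads the state, never writes it. Calling it twice over
 * the same state yields the same result. */
double avg_final_safe(const avg_state *st)
{
    return st->count == 0 ? 0.0 : st->sum / (double) st->count;
}

/* Buggy: "finishes" the aggregate by scribbling on the state, so a
 * second call over the same state returns nonsense. */
double avg_final_buggy(avg_state *st)
{
    st->sum = st->sum / (double) (st->count == 0 ? 1 : st->count);
    return st->sum;
}
```

In real aggregate code written in C, `AggCheckCallContext` (mentioned in xaggr.sgml) is the sanctioned way to test whether in-place modification of the transition value is safe.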
Hackers,
Per the Developer Meeting, we are scheduled to do a final triage of 9.4
patches the week before CF4 starts, which is *now*. The goal of this
triage is to divide patches already in queue into 5 groups:
1) Good To Go: patches which are 100% ready for final testing and commit.
2) Needs a
On Tue, Jan 7, 2014 at 10:20 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Jan 7, 2014 at 2:46 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 6, 2014 at 4:04 PM, james ja...@mansionfamily.plus.com wrote:
The point remains that you need to duplicate it into every process that
On Wednesday 8 January 2014 at 18:57:52, Simon Riggs si...@2ndquadrant.com
wrote: On 25 December 2013 12:01, Andreas Joseph
Krogh andr...@officenet.no wrote:
Ref:
http://wiki.postgresql.org/wiki/BDR_Project
Is implementing main BDR features into core Postgres
On Mon, Jan 6, 2014 at 11:37 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Jan 7, 2014 at 12:52 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 6, 2014 at 9:48 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Couldn't we also handle this by postponing FreeConfigVariables until
On Wed, Jan 08, 2014 at 10:45:37AM -0800, Josh Berkus wrote:
Hackers,
Per the Developer Meeting, we are scheduled to do a final triage of 9.4
patches the week before CF4 starts, which is *now*. The goal of this
triage is to divide patches already in queue into 5 groups:
With utmost
* David Fetter (da...@fetter.org) wrote:
On Wed, Jan 08, 2014 at 10:45:37AM -0800, Josh Berkus wrote:
Per the Developer Meeting, we are scheduled to do a final triage of 9.4
patches the week before CF4 starts, which is *now*. The goal of this
triage is to divide patches already in queue
On 01/08/2014 11:07 AM, David Fetter wrote:
On Wed, Jan 08, 2014 at 10:45:37AM -0800, Josh Berkus wrote:
Hackers,
Per the Developer Meeting, we are scheduled to do a final triage of 9.4
patches the week before CF4 starts, which is *now*. The goal of this
triage is to divide patches already
On 01/08/2014 10:51 PM, Robert Haas wrote:
On Tue, Jan 7, 2014 at 10:20 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Jan 7, 2014 at 2:46 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 6, 2014 at 4:04 PM, james ja...@mansionfamily.plus.com wrote:
The point remains that you
David Fetter da...@fetter.org writes:
With utmost respect, there are unsent patches which don't fit into the
categories below. Are you saying now that the cut-off is today, and
not actually 1/15, as everyone understands it to be?
I think Josh is merely suggesting that we could start triaging
On 2014-01-08 14:50:52 -0500, Tom Lane wrote:
David Fetter da...@fetter.org writes:
With utmost respect, there are unsent patches which don't fit into the
categories below. Are you saying now that the cut-off is today, and
not actually 1/15, as everyone understands it to be?
I think
What about doing the triage the first two days or so of the CF? Then we
know which patches have been submitted/updated and we haven't yet
performed too many reviews of patches which aren't going to make it.
If we triage the patches we already have NOW, then it'll be possible to
triage the
On Wed, Jan 8, 2014 at 05:39:23PM +, Simon Riggs wrote:
On 8 January 2014 09:07, Heikki Linnakangas hlinnakan...@vmware.com wrote:
I'm going to say right off the bat that I think the whole notion to
automatically disable synchronous replication when the standby goes down is
On Jan 8, 2014, at 9:27 PM, Bruce Momjian wrote:
On Wed, Jan 8, 2014 at 05:39:23PM +, Simon Riggs wrote:
On 8 January 2014 09:07, Heikki Linnakangas hlinnakan...@vmware.com wrote:
I'm going to say right off the bat that I think the whole notion to
automatically disable synchronous
Alvaro Herrera wrote:
Robert Haas wrote:
I think this direction has some potential. I'm not sure it's right in
detail. The exact scheme you propose above won't work if you want to
leave out the schema name altogether, and more generally it's not
going to help very much with
On 01/08/2014 10:27 PM, Bruce Momjian wrote:
On Wed, Jan 8, 2014 at 05:39:23PM +, Simon Riggs wrote:
On 8 January 2014 09:07, Heikki Linnakangas hlinnakan...@vmware.com wrote:
I'm going to say right off the bat that I think the whole notion to
automatically disable synchronous
On Jan 7, 2014, at 20:11, Kevin Grittner kgri...@ymail.com wrote:
Yeah, neither of the provided examples rolled back the read only
transaction itself;
Actually, the fixed version [1] of my example does.
[1]
http://www.postgresql.org/message-id/8721aad3-7a3a-4576-b10e-f2cbd1e53...@phlo.org
On Wed, Jan 8, 2014 at 10:46:51PM +0200, Heikki Linnakangas wrote:
On 01/08/2014 10:27 PM, Bruce Momjian wrote:
On Wed, Jan 8, 2014 at 05:39:23PM +, Simon Riggs wrote:
On 8 January 2014 09:07, Heikki Linnakangas hlinnakan...@vmware.com wrote:
I'm going to say right off the bat that I
Florian Pflug f...@phlo.org wrote:
On Jan 7, 2014, at 20:11, Kevin Grittner kgri...@ymail.com wrote:
Yeah, neither of the provided examples rolled back the read only
transaction itself;
Actually, the fixed version [1] of my example does.
[1]
Bruce Momjian br...@momjian.us wrote:
Heikki Linnakangas wrote:
They want to have the cake and eat it too. But they're not
actually getting that. What they actually get is extra latency
when things work, with no gain in durability.
They are getting guaranteed durability until they get a
Kevin Grittner kgri...@ymail.com writes:
I'm torn on whether we should cave to popular demand on this; but
if we do, we sure need to be very clear in the documentation about
what a successful return from a commit request means. Sooner or
later, Murphy's Law being what it is, if we do this
On 2014-01-08 13:34:08 -0800, Kevin Grittner wrote:
On the other hand, we keep getting people saying they want the
database to make the promise of synchronous replication, and tell
applications that it has been successful even when it hasn't been,
as long as there's a line in the server log to
On 01/08/2014 12:27 PM, Bruce Momjian wrote:
I am glad Heikki and Simon agree, but I don't. ;-)
The way that I understand it is that you might want durability, but
might not want to sacrifice availability. Phrased that way, it makes
sense, and notifying the administrator seems the
On 01/08/2014 01:34 PM, Kevin Grittner wrote:
I'm torn on whether we should cave to popular demand on this; but
if we do, we sure need to be very clear in the documentation about
what a successful return from a commit request means. Sooner or
later, Murphy's Law being what it is, if we do
On 01/08/2014 11:37 PM, Andres Freund wrote:
On 2014-01-08 13:34:08 -0800, Kevin Grittner wrote:
On the other hand, we keep getting people saying they want the
database to make the promise of synchronous replication, and tell
applications that it has been successful even when it hasn't been,
as
Josh Berkus j...@agliodbs.com writes:
If we really want auto-degrading sync rep, then we'd (at a minimum) need
a way to determine *from the replica* whether or not it was in degraded
mode when the master died. What good do messages to the master log do
you if the master no longer exists?
How
Joshua D. Drake j...@commandprompt.com writes:
However, if the subscriber is down, the origin should NEVER wait. That
is just silly behavior and makes synchronous replication pretty much
useless. Machines go down, that is the nature of things. Yes, we should
log and log loudly if the
Andres Freund and...@2ndquadrant.com wrote:
On 2014-01-08 13:34:08 -0800, Kevin Grittner wrote:
On the other hand, we keep getting people saying they want the
database to make the promise of synchronous replication, and
tell applications that it has been successful even when it
hasn't been,
On Mon, Jan 6, 2014 at 12:35 PM, Amit Langote amitlangot...@gmail.com wrote:
On Sat, Dec 21, 2013 at 4:36 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Yet another version. The encoding/decoding code is now quite isolated in
ginpostinglist.c, so it's easy to experiment with
Anyone else?
Or you'll have to deal with me again?
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Hello
I don't like this direction. What can we do with JSON from plpgsql? Moreover,
JSON is not a very robust format against future changes.
Regards
Pavel
On 8.1.2014 21:43, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Alvaro Herrera wrote:
Robert Haas wrote:
I think this
Pavel Stehule wrote:
Hello
I don't like this direction. What can we do with JSON from plpgsql?
We have plenty of JSON functions and operators in SQL, and more to come
soon. Is that not enough?
Moreover, JSON is not a very robust format against future changes.
Not sure what you mean.
On 01/08/2014 01:55 PM, Tom Lane wrote:
Sync mode is about providing a guarantee that the data exists on more than
one server *before* we tell the client it's committed. If you don't need
that guarantee, you shouldn't be using sync mode. If you do need it,
it's not clear to me why you'd
On 2014-01-08 14:23:34 -0800, Joshua D. Drake wrote:
On 01/08/2014 01:55 PM, Tom Lane wrote:
Sync mode is about providing a guarantee that the data exists on more than
one server *before* we tell the client it's committed. If you don't need
that guarantee, you shouldn't be using sync
On 01/08/2014 02:04 PM, Peter Eisentraut wrote:
Anyone else?
Or you'll have to deal with me again?
I vote for Peter.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
On 01/08/2014 02:34 PM, Andres Freund wrote:
I don't think you've understood synchronous replication. There wouldn't
be *any* benefit to using it if it worked the way you wish since there
wouldn't be any additional guarantees. A single reconnect of the
streaming rep connection, without any
On 01/08/2014 02:38 PM, Josh Berkus wrote:
On 01/08/2014 02:04 PM, Peter Eisentraut wrote:
Anyone else?
Or you'll have to deal with me again?
I vote for Peter.
+1
JD
--
Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579
PostgreSQL Support, Training, Professional
On 2014-01-08 14:42:37 -0800, Joshua D. Drake wrote:
On 01/08/2014 02:34 PM, Andres Freund wrote:
I don't think you've understood synchronous replication. There wouldn't
be *any* benefit to using it if it worked the way you wish since there
wouldn't be any additional guarantees. A single
CC to hackers restored.
Pavel Stehule wrote:
On 8.1.2014 23:17, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Pavel Stehule wrote:
Hello
I don't like this direction. What can we do with JSON from plpgsql?
We have plenty of JSON functions and operators in SQL, and more
Joshua D. Drake j...@commandprompt.com writes:
On 01/08/2014 01:55 PM, Tom Lane wrote:
Sync mode is about providing a guarantee that the data exists on more than
one server *before* we tell the client it's committed. If you don't need
that guarantee, you shouldn't be using sync mode. If you
On 01/08/2014 02:46 PM, Andres Freund wrote:
db0->db1 in synchronous mode
The idea is that we know that data on db0 is not written until we know for a
fact that db1 also has that data. That is great and a guarantee of data
integrity between the two nodes.
That guarantee is never there. The
On Wed, January 8, 2014 22:29, Oleg Bartunov wrote:
Attached is a new version of patch, which addresses most issues raised
by Andres.
[ nested_hstore-0.42.patch.gz ]
Building documentation fails:
openjade:hstore.sgml:1010:18:E: end tag for element A which is not open
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-08 14:42:37 -0800, Joshua D. Drake wrote:
Using the model (as I understand it) that is being discussed we have
increased our failure rate because the moment db1 is down we also lose db0. The
node db0 may be up but if it isn't going to
* Andres Freund (and...@2ndquadrant.com) wrote:
That's why you should configure a second standby as another (candidate)
synchronous replica, also listed in synchronous_standby_names.
Perhaps we should stress in the docs that this is, in fact, the *only*
reasonable mode in which to run with sync
On 2014-01-08 17:56:37 -0500, Stephen Frost wrote:
* Andres Freund (and...@2ndquadrant.com) wrote:
That's why you should configure a second standby as another (candidate)
synchronous replica, also listed in synchronous_standby_names.
Perhaps we should stress in the docs that this is, in
On 01/08/2014 02:49 PM, Tom Lane wrote:
Then you don't understand the point of sync mode, and you shouldn't be
using it. The point is *exactly* to refuse to commit transactions unless
we can guarantee the data's been replicated.
I understand exactly that and I don't disagree, except in the
On 01/08/2014 01:49 PM, Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
If we really want auto-degrading sync rep, then we'd (at a minimum) need
a way to determine *from the replica* whether or not it was in degraded
mode when the master died. What good do messages to the master log do
On 2014-01-08 14:52:07 -0800, Joshua D. Drake wrote:
On 01/08/2014 02:46 PM, Andres Freund wrote:
The idea is that we know that data on db0 is not written until we know for a
fact that db1 also has that data. That is great and a guarantee of data
integrity between the two nodes.
That
* Andres Freund (and...@2ndquadrant.com) wrote:
On 2014-01-08 17:56:37 -0500, Stephen Frost wrote:
* Andres Freund (and...@2ndquadrant.com) wrote:
That's why you should configure a second standby as another (candidate)
synchronous replica, also listed in synchronous_standby_names.
Stephen,
I'm aware, my point was simply that we should state, up-front in
25.2.7.3 *and* where we document synchronous_standby_names, that it
requires at least three servers to be involved to be a workable
solution.
It's a workable solution with 2 servers. That's a low-availability,
Stephen Frost sfr...@snowman.net writes:
I'm aware, my point was simply that we should state, up-front in
25.2.7.3 *and* where we document synchronous_standby_names, that it
requires at least three servers to be involved to be a workable
solution.
It only requires that if your requirements
Josh,
* Josh Berkus (j...@agliodbs.com) wrote:
I'm aware, my point was simply that we should state, up-front in
25.2.7.3 *and* where we document synchronous_standby_names, that it
requires at least three servers to be involved to be a workable
solution.
It's a workable solution with 2
On 01/08/2014 03:18 PM, Stephen Frost wrote:
Do you really feel that a WARNING and increasing the docs to point
out that three systems are necessary, particularly under the 'high
availability' documentation and options, is a bad idea? I fail to see
how that does anything but clarify the
Josh Berkus j...@agliodbs.com writes:
HOWEVER, we've already kind of set up an indeterminate situation with
allowing sync rep groups and candidate sync rep servers. Consider this:
1. Master server A is configured with sync replica B and candidate sync
replica C
2. A rolling power/network
On Wed, Jan 8, 2014 at 2:23 PM, Joshua D. Drake j...@commandprompt.com wrote:
On 01/08/2014 01:55 PM, Tom Lane wrote:
Sync mode is about providing a guarantee that the data exists on more than
one server *before* we tell the client it's committed. If you don't need
that guarantee, you
On 01/08/2014 03:27 PM, Tom Lane wrote:
Good point, but C can't solve this for you just by logging. If C was the
first to go down, it has no way to know whether A and B committed more
transactions before dying; and it's unlikely to have logged its own crash,
either.
Sure. But if we *knew*
On Wed, Jan 8, 2014 at 2:56 PM, Stephen Frost sfr...@snowman.net wrote:
* Andres Freund (and...@2ndquadrant.com) wrote:
That's why you should configure a second standby as another (candidate)
synchronous replica, also listed in synchronous_standby_names.
Perhaps we should stress in the
Josh Berkus j...@agliodbs.com writes:
On 01/08/2014 03:27 PM, Tom Lane wrote:
What we lack, and should work on, is a way for sync mode to have M larger
than one. AFAICS, right now we'll report commit as soon as there's one
up-to-date replica, and some high-reliability cases are going to want
On 1/8/14, 6:05 PM, Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
On 01/08/2014 03:27 PM, Tom Lane wrote:
What we lack, and should work on, is a way for sync mode to have M larger
than one. AFAICS, right now we'll report commit as soon as there's one
up-to-date replica, and some
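Tom's "M larger than one" amounts to a quorum test over standby flush positions. A hypothetical sketch of that check (not backend code; `XLogPos` stands in for an LSN):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t XLogPos;   /* stand-in for a WAL location (LSN) */

/* A commit at commit_pos may be acknowledged once at least min_acks
 * standbys have flushed WAL up to (or past) that position. Current
 * behavior corresponds to min_acks == 1; the suggestion is to make
 * the quorum size configurable. */
bool quorum_reached(XLogPos commit_pos,
                    const XLogPos *standby_flush, size_t nstandbys,
                    int min_acks)
{
    int acks = 0;

    for (size_t i = 0; i < nstandbys; i++)
        if (standby_flush[i] >= commit_pos)
            acks++;
    return acks >= min_acks;
}
```

The hard part is not this arithmetic but the policy around it: which standbys count toward the quorum, and what happens when fewer than min_acks candidates are connected at all.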
On 12/23/13, 9:47 AM, Pavel Stehule wrote:
Has anybody looked into how hard it would be to add method notation
to PostgreSQL, so that instead of calling
getString(hstorevalue, n)
we could use
hstorevalue.getString(n)
yes, I played with it some years ago. I ended early,
On 1/8/14, 1:43 AM, Heikki Linnakangas wrote:
I've wanted the cluster case for a long time. I also see the use for the RECENT
scenario, especially if we had CLUSTER CONCURRENT that let you shrink the head
of the table as needed.
I suspect the in-memory case would only be useful if it could
On 01/09/2014 04:42 AM, Alvaro Herrera wrote:
If there's a NULL element when expanding an object, the whole thing
expands to empty. For example, if no AUTHORIZATION
clause is specified, authorization element is still there, but the
authorization_role element within it is NULL, and so the
On Wed, Jan 8, 2014 at 8:43 PM, Joshua D. Drake j...@commandprompt.com
wrote:
On 01/08/2014 02:38 PM, Josh Berkus wrote:
On 01/08/2014 02:04 PM, Peter Eisentraut wrote:
Anyone else?
Or you'll have to deal with me again?
I vote for Peter.
+1
+1
--
Fabrízio de Royes Mello
Is there a reason why the programs in src/bin/scripts all put newlines
at the end of the SQL commands they generate? This produces useless
empty lines in the server log (and client output, if selected).
Craig Ringer wrote:
Instead, can't we use your already proposed subclause structure?
{"authorization": {"authorization_role": "some guy",
"output": "AUTHORIZATION %i{authorization_role}"},
"if_not_exists": {"output": "IF NOT EXISTS"},
"name": "some schema",
"output": "CREATE SCHEMA
Peter Eisentraut pete...@gmx.net writes:
pg_upgrade creates a script analyze_new_cluster.{sh|bat} that runs
vacuumdb --analyze-only in three stages with different statistics target
settings to get a fresh cluster analyzed faster. I think this behavior
is also useful for clusters or databases
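For reference, the staged approach can be written down as the three commands the generated script issues. A sketch in C that just produces the command lines; the specific targets (1, then 10, then the configured default) mirror pg_upgrade's script and should be treated as assumptions here:

```c
#include <stddef.h>

/* Returns the shell command for analyze stage 1..3, or NULL otherwise.
 * Stage 1: bare-minimum statistics so the planner works at all.
 * Stage 2: cheap but usable statistics.
 * Stage 3: full statistics at the configured default target. */
const char *analyze_stage_command(int stage)
{
    switch (stage) {
    case 1:
        return "PGOPTIONS='-c default_statistics_target=1 "
               "-c vacuum_cost_delay=0' vacuumdb --all --analyze-only";
    case 2:
        return "PGOPTIONS='-c default_statistics_target=10' "
               "vacuumdb --all --analyze-only";
    case 3:
        return "vacuumdb --all --analyze-only";
    default:
        return NULL;
    }
}
```

Folding this behavior into vacuumdb itself, as Peter suggests, would make it usable after a restore from pg_dump as well, not just after pg_upgrade.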
Michael Paquier wrote:
A possible input for a test that users could provide would be something like
that:
# Node information for tests
nodes {
{node1, postgresql.conf params, recovery.conf params}
{node2, postgresql.conf params, recovery.conf params, slave of node1}
}
# Run test
On Wed, Jan 8, 2014 at 6:15 PM, Josh Berkus j...@agliodbs.com wrote:
Stephen,
I'm aware, my point was simply that we should state, up-front in
25.2.7.3 *and* where we document synchronous_standby_names, that it
requires at least three servers to be involved to be a workable
solution.
It's
A patch with updated documentation is attached.
On 01/02/2014 04:08 AM, Robert Haas wrote:
I'm wondering whether the time should be stored inside the PlannedStmt
node instead of passing it around separately. One possible problem
with the way you've done things here is that, in the case of a
On Thu, Jan 9, 2014 at 12:34 PM, Mark Dilger markdil...@yahoo.com wrote:
Michael Paquier wrote:
A possible input for a test that users could provide would be something
like that:
# Node information for tests
nodes {
{node1, postgresql.conf params, recovery.conf params}
{node2,