On Thu, Sep 1, 2011 at 21:59, Tomas Vondra t...@fuzzy.cz wrote:
I've prepared a significantly simplified version of the patch. The two
main changes are
(a) it does not update the pg_stat_bgwriter anymore, it just prints an
info message to the server log
(b) a new GUC is not required, it's driven by
Robert Haas robertmh...@gmail.com writes:
I'm also wondering if providing some shell script examples of a
fault-tolerant script to handle archiving would be useful.
I think it would.
My usual advice is to avoid having to write one if possible, because
it's more complex than it looks. What
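As a rough illustration of why such a script is more complex than it looks, a minimal fault-tolerant archiver has to refuse to overwrite existing segments, copy atomically, and exit non-zero so the server retries. The sketch below is not from the thread; the function name and paths are invented:

```shell
#!/bin/sh
# Hypothetical fault-tolerant archive script. From postgresql.conf it
# would be invoked roughly as: archive_command = 'archive_wal.sh %p %f'
set -eu

archive_wal() {
    src="$1"; fname="$2"; dest_dir="$3"
    # Refuse to overwrite an existing archive file; the non-zero exit
    # makes the server keep the segment and retry later.
    if [ -e "$dest_dir/$fname" ]; then
        echo "archive_wal: $fname already archived" >&2
        return 1
    fi
    # Copy under a temporary name, then rename, so a partial copy is
    # never mistaken for a completed archive file.
    cp "$src" "$dest_dir/$fname.tmp"
    mv "$dest_dir/$fname.tmp" "$dest_dir/$fname"
}

# Demo with throwaway directories standing in for %p and the archive.
tmp=$(mktemp -d)
mkdir -p "$tmp/archive"
printf 'fake wal' > "$tmp/000000010000000000000001"
archive_wal "$tmp/000000010000000000000001" 000000010000000000000001 "$tmp/archive"
ls "$tmp/archive"   # prints 000000010000000000000001
```

Even this leaves out the hard parts the thread alludes to (fsync of the copied file and directory, partial-write detection on retry), which is exactly why writing one is best avoided if possible.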
Greg Sabino Mullane g...@turnstep.com writes:
It's off topic. But I think custom format would require a major mangling
to be able to handle a complete cluster. This isn't just a simple matter
of programming, IMNSHO.
Oh, I meant just having it create separate custom format files for each
On 2 September 2011, 9:47, Magnus Hagander wrote:
On Thu, Sep 1, 2011 at 21:59, Tomas Vondra t...@fuzzy.cz wrote:
I've prepared a significantly simplified version of the patch. The two
main changes are
(a) it does not update the pg_stat_bgwriter anymore, it just prints an
info message to the server log
The fifth edition of the Italian PostgreSQL Day (PGDay.IT 2011) will be
held on November 25 in Prato.
It will cover topics for users, developers and contributors to
PostgreSQL, as well as decision and policy makers. For more information
about the conference, please visit the website at
On Fri, Sep 2, 2011 at 11:01, Tomas Vondra t...@fuzzy.cz wrote:
On 2 September 2011, 9:47, Magnus Hagander wrote:
On Thu, Sep 1, 2011 at 21:59, Tomas Vondra t...@fuzzy.cz wrote:
I've prepared a significantly simplified version of the patch. The two
main changes are
(a) it does not update the
hello …
i have been thinking about this issue for quite a while ...
given your idea i am not sure how this can work at all.
consider:
begin;
insert 1
insert 2
commit
assume this ends up in the same node,
now you split it into two …
1 and 2 will have exactly the
From what I can tell, every time I start a postmaster on HEAD (at least
when I've set wal_level=archive and max_wal_senders > 0), I get the
message:
LOG: terminating all walsender processes to force cascaded standby(s)
to update timeline and reconnect
in the startup log.
This is long before I've
On Fri, 2011-09-02 at 14:01 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
hello …
i have been thinking about this issue for quite a while ...
given your idea i am not sure how this can work at all.
consider:
begin;
insert 1
insert 2
commit
assume this ends
hello …
the goal of the entire proxy thing is to make the right query go to the right
node / nodes.
we determine this by using a partitioning function and so on …
currently PL/proxy has only a handful of commands - one is RUN ON … which tells
us where to put things.
assume you issue a select …
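For context, the routing described above is expressed in PL/Proxy roughly like this (this follows the canonical example style from the PL/Proxy documentation; the function, cluster, and parameter names are invented):

```sql
-- A proxy function: PL/Proxy hashes the argument and RUNs the call ON
-- exactly one partition of the named cluster, which is expected to
-- define a function with the same signature.
CREATE FUNCTION get_user_email(i_username text)
RETURNS SETOF text AS $$
    CLUSTER 'usercluster';
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;
```

The difficulty raised in the thread is what happens to rows 1 and 2 of an already-committed transaction when the partitioning function's node count changes underneath such a mapping.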
On Fri, 2011-09-02 at 14:51 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
hello …
the goal of the entire proxy thing is to make the right query go to the right
node / nodes.
we determine this by using a partitioning function and so on …
currently PL/proxy has only a handful of commands -
On Sep 2, 2011, at 2:59 PM, Hannu Krosing wrote:
On Fri, 2011-09-02 at 14:51 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
hello …
the goal of the entire proxy thing is to make the right query go to the
right node / nodes.
we determine this by using a partitioning function and so on …
On 2 September 2011, 12:45, Magnus Hagander wrote:
On Fri, Sep 2, 2011 at 11:01, Tomas Vondra t...@fuzzy.cz wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is needed I guess.
I guess if it's at a DEBUG level it won't
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is needed I
guess.
Changing the log level is not the appropriate solution. Make it a
configuration parameter.
Peter Eisentraut pete...@gmx.net writes:
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is needed I
guess.
Changing the log level is not the appropriate solution.
Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
Robert Haas robertmh...@gmail.com writes:
I'm also wondering if providing some shell script examples of a
fault-tolerant script to handle archiving would be useful.
I think it would.
My usual advice is to avoid having to write one if possible,
On Thu, 2011-09-01 at 18:55 -0400, Bruce Momjian wrote:
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
+# contrib/pg_upgrade/test.sh
+#
+# Test driver for pg_upgrade. Initializes a new database cluster,
+# runs the regression tests (to put in some data), runs pg_dumpall,
On 09/02/2011 10:36 AM, Peter Eisentraut wrote:
On Thu, 2011-09-01 at 18:55 -0400, Bruce Momjian wrote:
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
+# contrib/pg_upgrade/test.sh
+#
+# Test driver for pg_upgrade. Initializes a new database cluster,
+# runs the regression tests
Kevin Grittner kevin.gritt...@wicourts.gov writes:
In a green field I might argue for having an archive_directory GUC
instead of archive_command. As it stands, it might be a really good
idea to provide a
I would think we then would need both. archive_command with parameters
offers both.
On 2 September 2011, 15:44, Peter Eisentraut wrote:
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is needed I
guess.
Changing the log level is not the appropriate
On 09/01/2011 09:40 PM, Greg Sabino Mullane wrote:
It's off topic. But I think custom format would require a major mangling
to be able to handle a complete cluster. This isn't just a simple matter
of programming, IMNSHO.
Oh, I meant just having it create separate custom format files for each
On Sep 2, 2011 5:02 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 2 September 2011, 15:44, Peter Eisentraut wrote:
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is
On 2 September 2011, 16:13, Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is needed I
guess.
Changing the
On 2 September 2011, 17:08, Magnus Hagander wrote:
On Sep 2, 2011 5:02 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 2 September 2011, 15:44, Peter Eisentraut wrote:
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG?
Tomas Vondra t...@fuzzy.cz wrote:
On 2 September 2011, 15:44, Peter Eisentraut wrote:
Changing the log level is not the appropriate solution. Make it
a configuration parameter.
Why is it an inappropriate solution? There's a log_checkpoints GUC
that drives it and you can either get basic info
On Sep 2, 2011 5:18 PM, Jaime Casanova ja...@2ndquadrant.com wrote:
On Fri, Sep 2, 2011 at 10:10 AM, Tomas Vondra t...@fuzzy.cz wrote:
My 'ideal' solution would be either to add another GUC (to turn this
on/off) or allow log_checkpoints to have three values
log_checkpoints = {off,
On Fri, Sep 2, 2011 at 10:10 AM, Tomas Vondra t...@fuzzy.cz wrote:
My 'ideal' solution would be either to add another GUC (to turn this
on/off) or allow log_checkpoints to have three values
log_checkpoints = {off, normal, detailed}
+1 on making it an enum, but I prefer the values {on, off,
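In postgresql.conf terms, the tristate proposed above would read something like the following. None of these values beyond on/off exists today, so this is purely a sketch of the proposal:

```
# Hypothetical tristate from the thread -- not an actual GUC value set.
# off      - no checkpoint logging
# normal   - one summary line per checkpoint (roughly today's "on")
# detailed - additionally log the per-checkpoint detail from the patch
log_checkpoints = detailed
```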
I've committed this, but I still think it would be helpful to revise
that comment. The term "boosted up" is not very precise or
informative. Could you submit a separate, comment-only patch to
improve this?
I tried to add a more detailed explanation of the logic of the do { ... } while
loop of
Peter Eisentraut pete...@gmx.net writes:
On Sat, 2011-08-27 at 13:32 -0400, Tom Lane wrote:
The larger problem is that if a subquery didn't get flattened, it's
often because it's got LIMIT, or GROUP BY, or some similar clause that
makes it highly suspect whether the statistics available for
Excerpts from Bruce Momjian's message of Fri Sep 02 12:20:50 -0300 2011:
Wow, that is interesting. So the problem is the inclusion of
replication/walsender.h in xlog.h. Hard to see how that could cause the
cube regression tests to fail, but of course, it is.
Hmm, so you included walsender.h
On Fri, Sep 2, 2011 at 10:34 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
Robert Haas robertmh...@gmail.com writes:
I'm also wondering if providing some shell script examples of a
fault-tolerant script to handle archiving would be useful.
On Thu, Sep 1, 2011 at 3:59 PM, Tomas Vondra t...@fuzzy.cz wrote:
I've prepared a significantly simplified version of the patch. The two
main changes are
(a) it does not update the pg_stat_bgwriter anymore, it just prints an
info message to the server log
(b) a new GUC is not required, it's driven
On 09/02/2011 01:00 PM, Robert Haas wrote:
On Fri, Sep 2, 2011 at 10:34 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
Robert Haas robertmh...@gmail.com writes:
I'm also wondering if providing some shell script examples of a
Robert Haas robertmh...@gmail.com wrote:
maybe instead of shipping a more complicated script we should be
trying to eliminate (or at least reduce) the need for a more
complicated script.
That was the intent of my pg_archiveto suggestion. I'll amend it
(based on other comments) to allow for
Andrew Dunstan and...@dunslane.net wrote:
For example, you might want to copy the archives to more than one
place for safety reasons.
We've never felt that the way to do that was to put the logic for it
in the archive script -- we archive to a local directory and set up
rsync tasks on cron
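The division of labor described above (a dumb local archive_command, with replication to other places handled by separate jobs) might look roughly like this. The archive_command line follows the pattern in the PostgreSQL documentation; the directories, hosts, and cron schedule are invented:

```
# postgresql.conf: archive locally, refusing to overwrite
archive_command = 'test ! -f /archive/wal/%f && cp %p /archive/wal/%f'

# crontab: fan the local archive out to other machines separately
*/5 * * * * rsync -a /archive/wal/ backup1:/archive/wal/
*/5 * * * * rsync -a /archive/wal/ backup2:/archive/wal/
```

Keeping the fan-out outside archive_command means a slow or down backup host never blocks WAL archiving on the primary.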
On Fri, Sep 2, 2011 at 19:13, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Andrew Dunstan and...@dunslane.net wrote:
For example, you might want to copy the archives to more than one
place for safety reasons.
We've never felt that the way to do that was to put the logic for it
in the
Attached patch implements a low watermark WAL location in the
walsender shmem array. Setting this value in a walsender prevents
transaction log removal prior to this point - similar to how
wal_keep_segments works, except with an absolute number rather than a
relative one. For now, this is set when
Peter Eisentraut wrote:
On Thu, 2011-09-01 at 18:55 -0400, Bruce Momjian wrote:
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
+# contrib/pg_upgrade/test.sh
+#
+# Test driver for pg_upgrade. Initializes a new database cluster,
+# runs the regression tests (to put
Andrew Dunstan wrote:
On 09/02/2011 10:36 AM, Peter Eisentraut wrote:
On Thu, 2011-09-01 at 18:55 -0400, Bruce Momjian wrote:
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
+# contrib/pg_upgrade/test.sh
+#
+# Test driver for pg_upgrade. Initializes a new database
On Fri, Sep 2, 2011 at 12:52 PM, Magnus Hagander mag...@hagander.net wrote:
Attached patch implements a low watermark wal location in the
walsender shmem array. Setting this value in a walsender prevents
transaction log removal prior to this point - similar to how
wal_keep_segments works,
On Fri, Sep 2, 2011 at 20:12, Jaime Casanova ja...@2ndquadrant.com wrote:
On Fri, Sep 2, 2011 at 12:52 PM, Magnus Hagander mag...@hagander.net wrote:
Attached patch implements a low watermark wal location in the
walsender shmem array. Setting this value in a walsender prevents
transaction log
On 09/02/2011 01:55 PM, Bruce Momjian wrote:
Andrew Dunstan wrote:
On 09/02/2011 10:36 AM, Peter Eisentraut wrote:
On Thu, 2011-09-01 at 18:55 -0400, Bruce Momjian wrote:
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
+# contrib/pg_upgrade/test.sh
+#
+# Test driver for
On 08/24/2011 05:01 PM, Josh Berkus wrote:
For those who are (like my clients :-) ) anxious to get their hands on
this immediately, a backport patch is also attached which applies to 9.0
sources, and applies with offsets to 8.4 sources.
FWIW, I have immediate use for this in creating cut-down
On Fri, 2011-09-02 at 17:02 +0200, Tomas Vondra wrote:
On 2 September 2011, 15:44, Peter Eisentraut wrote:
On Fri, 2011-09-02 at 11:01 +0200, Tomas Vondra wrote:
What about logging it with a lower level, e.g. NOTICE instead of the
current LOG? If that's not a solution then a new GUC is needed I
Peter Eisentraut pete...@gmx.net writes:
On Fri, 2011-09-02 at 17:02 +0200, Tomas Vondra wrote:
Why is it an inappropriate solution? There's a log_checkpoints GUC that
drives it and you can either get basic info (summary of the checkpoint) or
detailed log (with a lower log level).
If a user is
On Tue, 2011-08-30 at 15:31 -0400, Tom Lane wrote:
While not wishing to deny that this can be a problem, I think you're
overstating this aspect:
Now if this had been, say, plpython, which is also developed closely
together with the backend, but is probably shipped in a separate binary
On 2 September 2011, 20:46, Peter Eisentraut wrote:
On Fri, 2011-09-02 at 17:13 +0200, Tomas Vondra wrote:
Sure, but I think the effort not to have a zillion of GUC makes sense.
Well, I'll be the first to agree that reducing complexity in
configuration and tuning settings is worth aiming for.
Bruce Momjian br...@momjian.us writes:
Andrew Dunstan wrote:
In any case, it would be good to get rid of the limitation if possible.
Then we could look at creating an automated test that we could use in
the buildfarm.
Well, the idea of using the catalog version was that it is something we
On Fri, Sep 02, 2011 at 09:54:07PM +0300, Peter Eisentraut wrote:
On Wed, 2011-08-31 at 13:12 -0500, Ross J. Reedstrom wrote:
Hmm, this thread seems to have petered out without a conclusion. Just
wanted to comment that there _are_ non-password storage uses for these
digests: I use them in a
On 2 September 2011, 20:48, Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
On Fri, 2011-09-02 at 17:02 +0200, Tomas Vondra wrote:
Why is it inappropriate solution? There's a log_checkpoints GUC that
drives it and you can either get basic info (summary of the checkpoint)
or
detailed log
Peter Eisentraut pete...@gmx.net writes:
On Tue, 2011-08-30 at 15:31 -0400, Tom Lane wrote:
This is not possible at least in the Red Hat world, because all the
subpackages have exact-version-and-release dependencies tying them
together. That's distro policy not just my whim, and I'd expect
OK, this seems to have some pluses and no negative comments, so it seems
worth going forward. Do we want an equivalent pg_restore option?
I'm not sure it's *as* important for pg_restore, since I can easily use
a manifest to avoid restoring data for a single table. So I guess it's
a question
Hackers,
Who is our CommitFest Master for the next CF? It's only around 2 weeks
away.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
Tomas Vondra t...@fuzzy.cz writes:
On 2 September 2011, 20:48, Tom Lane wrote:
Yeah, if we're going to have this at all, some form of GUC control over
it seems necessary. I'm still disturbed by the verbosity of the
proposed output though. Couldn't we collapse the information into a
single log
Robert Haas robertmh...@gmail.com writes:
Another thought I have here is to wonder whether we should change
something on the server side so that we don't NEED such a complicated
archive_command. I mean, copying a file to a directory somewhere is
not fundamentally a complex operation. Nor is
On Fri, Sep 2, 2011 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Another thought I have here is to wonder whether we should change
something on the server side so that we don't NEED such a complicated
archive_command. I mean, copying a file to a
Josh Berkus j...@agliodbs.com wrote:
Who is our CommitFest Master for the next CF? It's only around 2
weeks away.
I have volunteered for that, although I can step aside if someone
else wants to do it. (I'm assuming nobody does, but you never
know.) If nobody else claims it within the next
Tom Lane t...@sss.pgh.pa.us wrote:
I think the notion that we should get rid of archive_command in
favor of something more hard-wired is sheer lunacy.
It's a good thing nobody proposed that.
-Kevin
Robert Haas robertmh...@gmail.com writes:
I wasn't really proposing to get rid of it, but I do wonder if there
are some configuration parameters we could add somewhere that would
make common cases easier without making really complex things
impossible.
I think the solution to that problem is
On 2 September 2011, 21:23, Tom Lane wrote:
Tomas Vondra t...@fuzzy.cz writes:
On 2 September 2011, 20:48, Tom Lane wrote:
Yeah, if we're going to have this at all, some form of GUC control over
it seems necessary. I'm still disturbed by the verbosity of the
proposed output though. Couldn't we
On 09/02/2011 03:04 PM, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Andrew Dunstan wrote:
In any case, it would be good to get rid of the limitation if possible.
Then we could look at creating an automated test that we could use in
the buildfarm.
Well, the idea of using the
Tomas Vondra t...@fuzzy.cz writes:
On 2 September 2011, 21:23, Tom Lane wrote:
Well, to be blunt, putting stuff into the postmaster log is entirely the
wrong way to satify a requirement like that. If you want to expose
progress information, it should be exposed via something dynamic like
Andrew Dunstan wrote:
On 09/02/2011 01:55 PM, Bruce Momjian wrote:
Andrew Dunstan wrote:
On 09/02/2011 10:36 AM, Peter Eisentraut wrote:
On Thu, 2011-09-01 at 18:55 -0400, Bruce Momjian wrote:
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
+#
Kevin Grittner kevin.gritt...@wicourts.gov writes:
(1) We're talking about a new /bin executable to do this which
could be referenced in an archive_command string or run from a
script called by archive_command, right?
That, or an internal implementation. That would be a function in the
On Fri, Sep 02, 2011 at 02:05:45PM -0500, k...@rice.edu wrote:
On Fri, Sep 02, 2011 at 09:54:07PM +0300, Peter Eisentraut wrote:
On Wed, 2011-08-31 at 13:12 -0500, Ross J. Reedstrom wrote:
Hmm, this thread seems to have petered out without a conclusion. Just
wanted to comment that there
Kevin,
I have volunteered for that, although I can step aside if someone
else wants to do it. (I'm assuming nobody does, but you never
know.) If nobody else claims it within the next couple days, I'll
post a call for reviewers.
Don't look at me.
I haven't deciphered Robert's code enough
On Thu, Sep 1, 2011 at 9:25 AM, Robert Haas robertmh...@gmail.com wrote:
Committed, with some changes. I used the new --if-exists option for
dropdb rather than doing it as you had it here; I assume this may have
been the motivation for that patch.
Thanks, and that was indeed one reason I
On Fri, 2011-09-02 at 16:00 -0400, Andrew Dunstan wrote:
Basically the idea is to stash away build and data dirs (after we've run
regression, PL and contrib testing) for stable branches (via a command
line option) and then test upgrading them. A trial run on the first part
is currently
On Fri, 2011-09-02 at 15:04 -0400, Tom Lane wrote:
IMO there's next to no value in testing that scenario anyway, since
nobody would ever use it in the field. What *would* be of value is
testing upgrades from previous release versions. Probably that will
take some work in the buildfarm
On 09/02/2011 06:37 PM, Peter Eisentraut wrote:
On Fri, 2011-09-02 at 16:00 -0400, Andrew Dunstan wrote:
Basically the idea is to stash away build and data dirs (after we've run
regression, PL and contrib testing) for stable branches (via a command
line option) and then test upgrading them. A
Andrew Dunstan and...@dunslane.net writes:
On 09/02/2011 06:37 PM, Peter Eisentraut wrote:
It won't work, unless you have a solution for fixing the paths of the
shared library modules used by the regression tests.
Well, we could drop those functions and not run tests that require them.
Or
On 09/02/2011 07:49 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 09/02/2011 06:37 PM, Peter Eisentraut wrote:
It won't work, unless you have a solution for fixing the paths of the
shared library modules used by the regression tests.
Well, we could drop those functions