$ pg_dump -F d -j 4 -f foo regression
pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver] query was:
SET TRANSACTION SNAPSHOT '2130-1'
pg_dump: [archiver (db)] query failed: pg_dump: [archiver (db)] query failed:
pg_dump: [archiver (db)] query failed: $
postmaster log shows:
On 2014-03-05 12:07:43 -0500, Tom Lane wrote:
$ pg_dump -F d -j 4 -f foo regression
pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver] query
was: SET TRANSACTION SNAPSHOT '2130-1'
pg_dump: [archiver (db)] query failed: pg_dump: [archiver (db)] query failed:
pg_dump:
On March 5, 2014 6:07:43 PM CET, Tom Lane t...@sss.pgh.pa.us wrote:
$ pg_dump -F d -j 4 -f foo regression
pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver]
query was: SET TRANSACTION SNAPSHOT '2130-1'
pg_dump: [archiver (db)] query failed: pg_dump: [archiver (db)] query
On 2014-03-05 18:39:52 +0100, Andres Freund wrote:
On March 5, 2014 6:07:43 PM CET, Tom Lane t...@sss.pgh.pa.us wrote:
$ pg_dump -F d -j 4 -f foo regression
pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver]
query was: SET TRANSACTION SNAPSHOT '2130-1'
pg_dump: [archiver
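For context on the SET TRANSACTION SNAPSHOT failures quoted above: when the server supports synchronized snapshots, parallel pg_dump has the leader connection export its snapshot and each worker connection adopt it before dumping anything. A minimal sketch of that handshake in plain SQL (the snapshot identifier shown is illustrative; the real value is whatever pg_export_snapshot() returns):

-- leader connection: open a REPEATABLE READ transaction and export its snapshot
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();    -- returns an identifier such as '00000003-0000001B-1'

-- each worker connection: adopt the exported snapshot before issuing any query
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';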
I have committed this patch, after a good deal of review and testing,
including on Windows.
While I did some editorializing, there is more work to do, particularly
on improving comments and messages, but I wanted to get this in as time
is getting very short, and it's important that we not
On 12/09/2012 04:05 AM, Bruce Momjian wrote:
FYI, I will be posting pg_upgrade performance numbers using Unix
processes. I will try to get the Windows code working but will also
need help.
I'm interested ... or at least willing to help ... re the Windows side.
Let me know if I can be of any
On 01/21/2013 06:02 PM, Craig Ringer wrote:
On 12/09/2012 04:05 AM, Bruce Momjian wrote:
FYI, I will be posting pg_upgrade performance numbers using Unix
processes. I will try to get the Windows code working but will also
need help.
I'm interested ... or at least willing to help ... re the
Hi,
On 2012-10-15 17:13:10 -0400, Andrew Dunstan wrote:
On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
On 09/17/2012 10:01 PM, Joachim Wieland wrote:
On Mon, Jun 18, 2012 at 10:05 PM, Joachim Wieland j...@mcknight.de
wrote:
Attached is a rebased version of the parallel pg_dump patch.
On 12/08/2012 11:01 AM, Andres Freund wrote:
Hi,
On 2012-10-15 17:13:10 -0400, Andrew Dunstan wrote:
On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
On 09/17/2012 10:01 PM, Joachim Wieland wrote:
On Mon, Jun 18, 2012 at 10:05 PM, Joachim Wieland j...@mcknight.de
wrote:
Attached is a rebased
On Sat, Dec 8, 2012 at 11:13:30AM -0500, Andrew Dunstan wrote:
On 12/08/2012 11:01 AM, Andres Freund wrote:
Hi,
On 2012-10-15 17:13:10 -0400, Andrew Dunstan wrote:
On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
On 09/17/2012 10:01 PM, Joachim Wieland wrote:
On Mon, Jun 18, 2012 at 10:05
On Sat, Dec 8, 2012 at 3:05 PM, Bruce Momjian br...@momjian.us wrote:
On Sat, Dec 8, 2012 at 11:13:30AM -0500, Andrew Dunstan wrote:
I am working on it when I get a chance, but keep getting hammered.
I'd love somebody else to review it too.
FYI, I will be posting pg_upgrade performance
On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
On 09/17/2012 10:01 PM, Joachim Wieland wrote:
On Mon, Jun 18, 2012 at 10:05 PM, Joachim Wieland j...@mcknight.de
wrote:
Attached is a rebased version of the parallel pg_dump patch.
Attached is another rebased version for the current commitfest.
On 04/05/2012 12:32 PM, Joachim Wieland wrote:
So here's a pg_dump benchmark from a real world database as requested
earlier. This is a ~750 GB large 9.0.6 database, and the backup has
been done over the internal network from a different machine. Both
machines run Linux.
I am attaching a
On Tue, Apr 3, 2012 at 9:26 AM, Andrew Dunstan and...@dunslane.net wrote:
First, either the creation of the destination directory needs to be delayed
until all the sanity checks have passed and we're sure we're actually going
to write something there, or it needs to be removed if we error exit
On 04/04/2012 05:03 AM, Joachim Wieland wrote:
Second, all the PrintStatus traces are annoying and need to be removed, or
perhaps better only output in debugging mode (using ahlog() instead of just
printf())
Sure, PrintStatus is just there for now to see what's going on. My
plan was to remove
Excerpts from Joachim Wieland's message of Wed Apr 04 15:43:53 -0300 2012:
On Wed, Apr 4, 2012 at 8:27 AM, Andrew Dunstan and...@dunslane.net wrote:
Sure, PrintStatus is just there for now to see what's going on. My
plan was to remove it entirely in the final patch.
We need that final
I haven't finished reviewing this yet - but there are some things that
need to be fixed.
First, either the creation of the destination directory needs to be
delayed until all the sanity checks have passed and we're sure we're
actually going to write something there, or it needs to be removed
On Mar 30, 2010, at 8:15 AM, Stefan Kaltenbrunner wrote:
Peter Eisentraut wrote:
On Tue, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been
Jeff wrote:
On Mar 30, 2010, at 8:15 AM, Stefan Kaltenbrunner wrote:
Peter Eisentraut wrote:
On Tue, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically
Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
On 3/29/10 7:46 AM, Joachim Wieland wrote:
I actually assume that whenever people are interested
in a very fast dump, it is because they are doing some maintenance
task (like migrating to a different server) that involves pg_dump. In
these
On Tue, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been developed with CPU efficiency in mind.
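One way to follow up on that suggestion is to profile the client side directly. A sketch assuming a Linux host with perf installed and the regression database used elsewhere in the thread (illustrative commands, not something anyone in the thread reported running):

$ perf record -g -o pg_dump.perf -- pg_dump -Fc -f /dev/null regression   # sample the whole dump run
$ perf report -i pg_dump.perf --sort=dso,symbol                           # see where the CPU time goes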
On Tue, 30 Mar 2010 13:01:54 +0200, Peter Eisentraut pete...@gmx.net
wrote:
On Tue, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been developed
Peter Eisentraut wrote:
On Tue, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been developed with CPU efficiency in mind.
It's not pg_dump that is
People have been talking about a parallel version of pg_dump a few
times already. I have been working on some proof-of-concept code for
this feature every now and then and I am planning to contribute this
for 9.1.
There are two main issues with a parallel version of pg_dump:
The first one is
On Mon, Mar 29, 2010 at 04:46:48PM +0200, Joachim Wieland wrote:
People have been talking about a parallel version of pg_dump a few
times already. I have been working on some proof-of-concept code for
this feature every now and then and I am planning to contribute this
for 9.1.
There are
On Mon, Mar 29, 2010 at 10:46 AM, Joachim Wieland j...@mcknight.de wrote:
- There are ideas on how to solve the issue with the consistent
snapshot but in the end you can always solve it by stopping your
application(s). I actually assume that whenever people are interested
in a very fast dump,
Robert Haas wrote:
On Mon, Mar 29, 2010 at 10:46 AM, Joachim Wieland j...@mcknight.de wrote:
[...]
- Regarding the output of pg_dump I am proposing two solutions. The
first one is to introduce a new archive type directory where each
table and each blob is a file in a directory, similar to the
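For readers following along, this is roughly how the directory archive format and the proposed parallel jobs end up being driven from the command line (database and directory names are placeholders):

$ pg_dump --format=directory --jobs=4 --file=dumpdir mydb    # one file per table/blob under dumpdir/
$ pg_restore --jobs=4 --dbname=mydb_copy dumpdir             # parallel restore into an existing database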
On Mon, Mar 29, 2010 at 1:16 PM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
Robert Haas wrote:
On Mon, Mar 29, 2010 at 10:46 AM, Joachim Wieland j...@mcknight.de wrote:
[...]
- Regarding the output of pg_dump I am proposing two solutions. The
first one is to introduce a new
On 3/29/10 7:46 AM, Joachim Wieland wrote:
I actually assume that whenever people are interested
in a very fast dump, it is because they are doing some maintenance
task (like migrating to a different server) that involves pg_dump. In
these cases, they would stop their system anyway.
Actually,
Josh Berkus j...@agliodbs.com writes:
On 3/29/10 7:46 AM, Joachim Wieland wrote:
I actually assume that whenever people are interested
in a very fast dump, it is because they are doing some maintenance
task (like migrating to a different server) that involves pg_dump. In
these cases, they
On Mon, Mar 29, 2010 at 4:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Josh Berkus j...@agliodbs.com writes:
On 3/29/10 7:46 AM, Joachim Wieland wrote:
I actually assume that whenever people are interested
in a very fast dump, it is because they are doing some maintenance
task (like migrating to
Robert Haas wrote:
It's completely possible that you could want to clone a server for dev
and have more CPU and I/O bandwidth available than can be efficiently
used by a non-parallel pg_dump. But certainly what Joachim is talking
about will be a good start. I think there is merit to the