On 2014-03-05 18:39:52 +0100, Andres Freund wrote:
> On March 5, 2014 6:07:43 PM CET, Tom Lane wrote:
> >$ pg_dump -F d -j 4 -f foo regression
> >pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver]
> >query was: SET TRANSACTION SNAPSHOT '2130-1'
> >pg_dump: [archiver (db)] quer
On March 5, 2014 6:07:43 PM CET, Tom Lane wrote:
>$ pg_dump -F d -j 4 -f foo regression
>pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver]
>query was: SET TRANSACTION SNAPSHOT '2130-1'
>pg_dump: [archiver (db)] query failed: pg_dump: [archiver (db)] query
>failed: pg_dump: [a
On 2014-03-05 12:07:43 -0500, Tom Lane wrote:
> $ pg_dump -F d -j 4 -f foo regression
> pg_dump: [archiver (db)] query failed: pg_dump: [parallel archiver] query
> was: SET TRANSACTION SNAPSHOT '2130-1'
> pg_dump: [archiver (db)] query failed: pg_dump: [archiver (db)] query failed:
> pg_dump:
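A note for readers skimming the thread: parallel pg_dump keeps its -j worker connections consistent by having the leader export one snapshot that every worker then adopts, which is the SET TRANSACTION SNAPSHOT visible in the failure above. A minimal sketch of that mechanism in two plain psql sessions (server 9.2 or later; the snapshot id is illustrative, pg_export_snapshot() returns a fresh one each time):

  -- session 1 (leader): export a snapshot and keep the transaction open
  BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SELECT pg_export_snapshot();      -- returns an id such as '00000003-1'

  -- session 2 (worker): adopt it before running anything else
  BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SET TRANSACTION SNAPSHOT '00000003-1';
  -- both sessions now see exactly the same database state

The jumbled text in the report above looks like several workers printing their errors to stderr at once.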
On 01/21/2013 06:02 PM, Craig Ringer wrote:
> On 12/09/2012 04:05 AM, Bruce Momjian wrote:
>> FYI, I will be posting pg_upgrade performance numbers using Unix
>> processes. I will try to get the Windows code working but will also
>> need help.
> I'm interested ... or at least willing to help ... r
On 12/09/2012 04:05 AM, Bruce Momjian wrote:
>
> FYI, I will be posting pg_upgrade performance numbers using Unix
> processes. I will try to get the Windows code working but will also
> need help.
I'm interested ... or at least willing to help ... re the Windows side.
Let me know if I can be of an
On Sat, Dec 8, 2012 at 3:05 PM, Bruce Momjian wrote:
> On Sat, Dec 8, 2012 at 11:13:30AM -0500, Andrew Dunstan wrote:
> > I am working on it when I get a chance, but keep getting hammered.
> > I'd love somebody else to review it too.
>
> FYI, I will be posting pg_upgrade performance numbers usin
On Sat, Dec 8, 2012 at 11:13:30AM -0500, Andrew Dunstan wrote:
>
> On 12/08/2012 11:01 AM, Andres Freund wrote:
> >Hi,
> >
> >On 2012-10-15 17:13:10 -0400, Andrew Dunstan wrote:
> >>On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
> >>>On 09/17/2012 10:01 PM, Joachim Wieland wrote:
> >>>>On Mon, Jun
On 12/08/2012 11:01 AM, Andres Freund wrote:
> Hi,
>
> On 2012-10-15 17:13:10 -0400, Andrew Dunstan wrote:
> > On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
> > > On 09/17/2012 10:01 PM, Joachim Wieland wrote:
> > > > On Mon, Jun 18, 2012 at 10:05 PM, Joachim Wieland wrote:
> > > > > Attached is a rebased version of the pa
Hi,
On 2012-10-15 17:13:10 -0400, Andrew Dunstan wrote:
>
> On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
> >
> >On 09/17/2012 10:01 PM, Joachim Wieland wrote:
> >>On Mon, Jun 18, 2012 at 10:05 PM, Joachim Wieland
> >>wrote:
> >>>Attached is a rebased version of the parallel pg_dump patch.
> >>At
On 10/13/2012 10:46 PM, Andrew Dunstan wrote:
> On 09/17/2012 10:01 PM, Joachim Wieland wrote:
> > On Mon, Jun 18, 2012 at 10:05 PM, Joachim Wieland wrote:
> > > Attached is a rebased version of the parallel pg_dump patch.
> > Attached is another rebased version for the current commitfest.
These did not
On 04/05/2012 12:32 PM, Joachim Wieland wrote:
> So here's a pg_dump benchmark from a real world database as requested
> earlier. This is a ~750 GB large 9.0.6 database, and the backup has
> been done over the internal network from a different machine. Both
> machines run Linux.
>
> I am attaching
Excerpts from Joachim Wieland's message of mié abr 04 15:43:53 -0300 2012:
> On Wed, Apr 4, 2012 at 8:27 AM, Andrew Dunstan wrote:
> >> Sure, PrintStatus is just there for now to see what's going on. My
> >> plan was to remove it entirely in the final patch.
> >
> > We need that final patch NOW,
On 04/04/2012 05:03 AM, Joachim Wieland wrote:
> > Second, all the PrintStatus traces are annoying and need to be removed, or
> > perhaps better only output in debugging mode (using ahlog() instead of just
> > printf())
> Sure, PrintStatus is just there for now to see what's going on. My
> plan was to remove
On Tue, Apr 3, 2012 at 9:26 AM, Andrew Dunstan wrote:
> First, either the creation of the destination directory needs to be delayed
> until all the sanity checks have passed and we're sure we're actually going
> to write something there, or it needs to be removed if we error exit before
> anything
Jeff wrote:
> On Mar 30, 2010, at 8:15 AM, Stefan Kaltenbrunner wrote:
> > Peter Eisentraut wrote:
> > > On tis, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
> > > > on fast systems pg_dump is completely CPU bottlenecked
> > > Might be useful to profile why that is. I don't think pg_dump has
> > > historically
On Mar 30, 2010, at 8:15 AM, Stefan Kaltenbrunner wrote:
> Peter Eisentraut wrote:
> > On tis, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
> > > on fast systems pg_dump is completely CPU bottlenecked
> > Might be useful to profile why that is. I don't think pg_dump has
> > historically been develope
Peter Eisentraut wrote:
> On tis, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
> > on fast systems pg_dump is completely CPU bottlenecked
> Might be useful to profile why that is. I don't think pg_dump has
> historically been developed with CPU efficiency in mind.
It's not pg_dump that is t
On Tue, 30 Mar 2010 13:01:54 +0200, Peter Eisentraut wrote:
> On tis, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
> > on fast systems pg_dump is completely CPU bottlenecked
> Might be useful to profile why that is. I don't think pg_dump has
> historically been developed with CPU efficien
On tis, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
> on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been developed with CPU efficiency in mind.
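A hedged sketch of how one might follow up on that suggestion on Linux, assuming the perf tool is available (the dump command and database name are illustrative):

  $ perf record -g -- pg_dump -Fc -f /tmp/regression.dump regression
  $ perf report --sort=dso,symbol   # is the time in zlib, COPY parsing, libpq?

Since much of the work happens in the server backend, a system-wide profile (perf record -a -g) taken while the dump runs may be more telling than profiling the pg_dump client alone.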
Tom Lane wrote:
> Josh Berkus writes:
> > On 3/29/10 7:46 AM, Joachim Wieland wrote:
> > > I actually assume that whenever people are interested
> > > in a very fast dump, it is because they are doing some maintenance
> > > task (like migrating to a different server) that involves pg_dump. In
> > > these cases, they would
Robert Haas wrote:
> It's completely possible that you could want to clone a server for dev
> and have more CPU and I/O bandwidth available than can be efficiently
> used by a non-parallel pg_dump. But certainly what Joachim is talking
> about will be a good start. I think there is merit to the
> sync
On Mon, Mar 29, 2010 at 4:11 PM, Tom Lane wrote:
> Josh Berkus writes:
>> On 3/29/10 7:46 AM, Joachim Wieland wrote:
>>> I actually assume that whenever people are interested
>>> in a very fast dump, it is because they are doing some maintenance
>>> task (like migrating to a different server) tha
Josh Berkus writes:
> On 3/29/10 7:46 AM, Joachim Wieland wrote:
>> I actually assume that whenever people are interested
>> in a very fast dump, it is because they are doing some maintenance
>> task (like migrating to a different server) that involves pg_dump. In
>> these cases, they would stop t
On 3/29/10 7:46 AM, Joachim Wieland wrote:
> I actually assume that whenever people are interested
> in a very fast dump, it is because they are doing some maintenance
> task (like migrating to a different server) that involves pg_dump. In
> these cases, they would stop their system anyway.
Actual
On Mon, Mar 29, 2010 at 1:16 PM, Stefan Kaltenbrunner
wrote:
> Robert Haas wrote:
>>
>> On Mon, Mar 29, 2010 at 10:46 AM, Joachim Wieland wrote:
>
> [...]
>>>
>>> - Regarding the output of pg_dump I am proposing two solutions. The
>>> first one is to introduce a new archive type "directory" where
Robert Haas wrote:
> On Mon, Mar 29, 2010 at 10:46 AM, Joachim Wieland wrote:
[...]
> > - Regarding the output of pg_dump I am proposing two solutions. The
> > first one is to introduce a new archive type "directory" where each
> > table and each blob is a file in a directory, similar to the
> > experimental "f
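This directory archive type is the design that eventually shipped as -F d (in 9.1, with parallel dump on top of it in 9.3), and it is what makes per-table parallelism workable: each table's data lands in its own file, so workers can write them, and pg_restore can read them, independently. An illustrative session; the numeric .dat names are per-entry dump ids and will differ:

  $ pg_dump -F d -j 4 -f /tmp/regression.dir regression
  $ ls /tmp/regression.dir
  toc.dat  2308.dat.gz  2309.dat.gz  blobs.toc  blob_16404.dat.gz  ...
  $ pg_restore -j 4 -d newdb /tmp/regression.dir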
On Mon, Mar 29, 2010 at 10:46 AM, Joachim Wieland wrote:
> - There are ideas on how to solve the issue with the consistent
> snapshot but in the end you can always solve it by stopping your
> application(s). I actually assume that whenever people are interested
> in a very fast dump, it is because
On Mon, Mar 29, 2010 at 04:46:48PM +0200, Joachim Wieland wrote:
> People have been talking about a parallel version of pg_dump a few
> times already. I have been working on some proof-of-concept code for
> this feature every now and then and I am planning to contribute this
> for 9.1.
>
> There a