Re: [HACKERS] pg_dump issues

2017-07-26 Thread Дмитрий Воронин
Hello, 

25.07.2017, 11:17, "Victor Drobny" :
>
> Hello,
>
> Do you expect to have some flag like '--rename=test->test2'?

Yes, I do. 

> Will a dump with test replaced by test2 (of course, only in related
> places) be a valid dump in this case?

Yes, it will.

> What is the possible scenario for the renaming option? Is it going to be
> used for dumping a single schema only?
> Or could it be a dump of a whole database? In that case pg_dump should
> also support multiple renaming rules.

pg_dump would scan the dumped objects and rename them according to rules
set via pg_dump command-line options.
As far as I know, pg_dump and pg_restore use the same functions, so the
renaming mechanism can be integrated into pg_restore too.
pg_restore would likewise scan the dumped objects and rename them by those
rules.

In the future, rules could be applied to all database objects.
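
For context, a hedged sketch of the closest manual workaround available
today: restore the dump as-is (so the schema is still named test), then
rename it in SQL afterwards.

   -- Rename the schema once the restore completes.
   ALTER SCHEMA test RENAME TO test2;
   -- Note: this renames the schema object itself; references to "test"
   -- baked into function bodies or search_path settings are not
   -- rewritten, which is the gap a dump-level rename rule would close.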

>
> Thank you for your attention!
>
> --
> --
> Victor Drobny
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company

-- 
Best regards, Dmitry Voronin




Re: [HACKERS] pg_dump issues

2017-07-25 Thread Victor Drobny

> We can't create a schema dump under another (user-defined) name. E.g.
> we dump schema test and we want to save its dump under the name test2,
> in any format. The same applies to database dumps.


Hello,

Do you expect to have some flag like '--rename=test->test2'?
Will a dump with test replaced by test2 (of course, only in related
places) be a valid dump in this case?
What is the possible scenario for the renaming option? Is it going to be
used for dumping a single schema only?
Or could it be a dump of a whole database? In that case pg_dump should
also support multiple renaming rules.


Thank you for your attention!

--
--
Victor Drobny
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] pg_dump issues

2017-06-05 Thread Craig Ringer
On 6 June 2017 at 11:37, Дмитрий Воронин  wrote:
> Hello,
>
> We are working on one project with Postgres as engineers.
>
> Below is a list of inconveniences that we are having with PostgreSQL. We
> would like to solve them if possible.
>
> We can't create a schema dump under another (user-defined) name. E.g. we
> dump schema test and we want to save its dump under the name test2, in
> any format. The same applies to database dumps.

This is a pretty common request. I expect a patch to add a --transform
or --rename option to pg_dump (or maybe pg_restore) might be accepted.
I suggest posting a detailed design for how you plan to do it and
asking for feedback before proceeding to implement it. You should
search the mailing list for past discussions and ideas too.

Otherwise, consulting outfits can do this sort of thing for you; see
https://www.postgresql.org/support/professional_support/ . (Note, I
work for one of them).

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] pg_dump issues

2017-06-05 Thread Дмитрий Воронин
Hello,

We are working on one project with Postgres as engineers.

Below is a list of inconveniences that we are having with PostgreSQL. We
would like to solve them if possible.

We can't create a schema dump under another (user-defined) name. E.g. we
dump schema test and we want to save its dump under the name test2, in any
format. The same applies to database dumps.

So, there is no mechanism to copy one schema to another, or to make
aliases for any database object.

How can we solve them?

Thank you.



Re: [HACKERS] pg_dump issues

2011-10-03 Thread Florian Pflug
On Oct 2, 2011, at 23:15, Joe Abbate wrote:
> I'm somewhat surprised there appears to be no ability to lock a database
> exclusively for something like pg_dump/pg_restore, so you're not
> surprised by concurrent activity against the catalogs.  I'm guessing the
> assumption is that MVCC will take care of that?

I think the hope is more that it will, one day. Currently, postgres internally
accesses the catalog with SnapshotNow, not with an MVCC snapshot. This is
necessary to ensure, for example, that rows inserted into a table also get
inserted into a newly created index. This wouldn't affect pg_dump if it only
accessed the catalog via SQL, but it doesn't. pg_dump also depends on some
server-side functions to do its work, and since these functions in turn use
SnapshotNow-based internal backend functions, pg_dump essentially uses a mix
of SnapshotNow and its transaction's MVCC snapshot.
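
A minimal illustration of that mix (a sketch, not pg_dump's exact queries;
the OID is the one from Andrew's failure example in this thread):

   -- Read under the dump transaction's MVCC snapshot:
   SELECT c.oid, c.relname
   FROM pg_catalog.pg_class c
   WHERE c.relkind = 'v';

   -- Rebuilt by a server-side function whose internal catalog lookups
   -- use SnapshotNow, so a concurrently dropped view can fail here:
   SELECT pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef;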

There has been talk about reducing the use of SnapshotNow for catalog access,
but AFAIK there's no concrete proposal, and certainly no patch, available.

best regards,
Florian Pflug




Re: [HACKERS] pg_dump issues

2011-10-03 Thread Robert Haas
On Sat, Oct 1, 2011 at 9:46 PM, Andrew Dunstan and...@dunslane.net wrote:
> How would that help? This isn't a lock failure.

It is, rather, a failure to lock.  Currently, LOCK TABLE only works on
tables, and pg_dump only applies it to tables.  If the offending
object had been a table rather than a view, pg_dump would (I believe)
have blocked trying to obtain an AccessShareLock against the existing
AccessExclusiveLock.  We talked about allowing locks on other types of
relations, but due to some bad syntax choices in the past it's not
completely obvious how to shoehorn that in.

http://archives.postgresql.org/pgsql-hackers/2011-06/msg00119.php
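
A hedged sketch of the asymmetry (relation names are hypothetical;
behaviour as of this writing):

   BEGIN;
   -- What pg_dump does for each table it will dump:
   LOCK TABLE some_table IN ACCESS SHARE MODE;
   -- The same statement is rejected for a view:
   LOCK TABLE some_view IN ACCESS SHARE MODE;
   -- ERROR:  "some_view" is not a table
   ROLLBACK;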

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pg_dump issues

2011-10-03 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
> On Sat, Oct 1, 2011 at 9:46 PM, Andrew Dunstan and...@dunslane.net wrote:
>> How would that help? This isn't a lock failure.
>
> It is, rather, a failure to lock.  Currently, LOCK TABLE only works on
> tables, and pg_dump only applies it to tables.  If the offending
> object had been a table rather than a view, pg_dump would (I believe)
> have blocked trying to obtain an AccessShareLock against the existing
> AccessExclusiveLock.

Yeah, and it would still have failed once the lock was released.

Short of providing some sort of global DDL-blocking lock (with attendant
performance consequences) it's not clear how to create an entirely
bulletproof solution here.  This isn't a new problem --- we've been
aware of pg_dump's limitations in this area for many years.

regards, tom lane



Re: [HACKERS] pg_dump issues

2011-10-03 Thread Alvaro Herrera

Excerpts from Tom Lane's message of Mon Oct 03 01:47:18 -0300 2011:

> (Without cassert, it looks like LockReassignCurrentOwner is the next
> biggest time sink; I'm wondering if there's some sort of O(N^2) behavior
> in there.)

That seems fishy.  Even if there weren't quadratic behavior, should this
be called at all?  AFAIK it should only be used in cases where there are
subtransactions at work, and I don't think pg_dump uses them.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] pg_dump issues

2011-10-03 Thread Andrew Dunstan



On 10/03/2011 12:47 AM, Tom Lane wrote:

> Andrew Dunstan and...@dunslane.net writes:
>> While investigating a client problem I just observed that pg_dump takes
>> a surprisingly large amount of time to dump a schema with a large number
>> of views. The client's hardware is quite spiffy, and yet pg_dump is
>> taking many minutes to dump a schema with some 35,000 views. Here's a
>> simple test case:
>>
>>    create schema views;
>>    do 'begin for i in 1 .. 10000 loop execute $$create view views.v_$$
>>    || i ||$$ as select current_date as d, current_timestamp as ts,
>>    $_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
>>    loop; end;';
>>
>> On my modest hardware this database took 4m18.864s for pg_dump to run.
>
> It takes about that on my machine too ... with --enable-cassert.
> oprofile said that 90% of the runtime was going into AllocSetCheck,
> so I rebuilt without cassert, and the runtime dropped to 16 seconds.
> What were you testing?


Yeah, you're right, that must have been it. That's a big hit; I didn't
realise cassert was so heavy. (Note to self: test with production build
settings.) I don't seem to be batting 1000 ...


I still need to get to the bottom of why the client's machine is taking 
so long.


I do notice that we seem to be doing a lot of repetitive tasks, e.g.
calling pg_format_type() over and over for the same arguments. Would we
be better off caching that?
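
For reference, the SQL-level counterpart that gets evaluated once per
column (a sketch, not pg_dump's actual query; the view name is from the
test case above). A client-side cache keyed on (atttypid, atttypmod)
could skip the repeated calls:

   SELECT a.attname,
          pg_catalog.format_type(a.atttypid, a.atttypmod) AS coltype
   FROM pg_catalog.pg_attribute a
   WHERE a.attrelid = 'views.v_1'::regclass
     AND a.attnum > 0 AND NOT a.attisdropped;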


cheers

andrew






Re: [HACKERS] pg_dump issues

2011-10-03 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
> Excerpts from Tom Lane's message of Mon Oct 03 01:47:18 -0300 2011:
>> (Without cassert, it looks like LockReassignCurrentOwner is the next
>> biggest time sink; I'm wondering if there's some sort of O(N^2) behavior
>> in there.)

> That seems fishy.  Even if there weren't quadratic behavior, should this
> be called at all?  AFAIK it should only be used in cases where there are
> subtransactions at work, and I don't think pg_dump uses them.

I wondered that too, but the calls are legit --- they're coming from
PortalDrop.

It appears that most of the calls don't actually have anything to do,
but they're iterating through a rather large local lock table to find
that out.  We probably ought to think of a way to avoid that.  The trick
is to not make performance worse for typical small transactions that
aren't holding many locks (which I think was the design center for this
to begin with).
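
For a sense of scale, the lock count the dump backend accumulates can be
watched from another session via the shared pg_locks view (the pid value
here is hypothetical):

   SELECT count(*) FROM pg_locks WHERE pid = 12345;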

regards, tom lane



Re: [HACKERS] pg_dump issues

2011-10-02 Thread Joe Abbate
Hi Andrew,

On 10/01/2011 09:46 PM, Andrew Dunstan wrote:
> On 10/01/2011 05:48 PM, Joe Abbate wrote:
>> On 10/01/2011 05:08 PM, Andrew Dunstan wrote:
>>> There is also this gem of behaviour, which is where I started:
>>>
>>>    p1                  p2
>>>    begin;
>>>    drop view foo;
>>>                        pg_dump
>>>    commit;
>>>                        boom.
>>>
>>> with this error:
>>>
>>>    2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
>>>    relation with OID 133640
>>>    2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
>>>    pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
>>>
>>> Of course, this isn't caused by having a large catalog, but it's
>>> terrible nevertheless. I'm not sure what to do about it.
>> Couldn't you run pg_dump with --lock-wait-timeout?
>
> How would that help? This isn't a lock failure.

I misinterpreted the error.  I have confirmed the behavior you see.  I'm
somewhat surprised there appears to be no ability to lock a database
exclusively for something like pg_dump/pg_restore, so you're not
surprised by concurrent activity against the catalogs.  I'm guessing the
assumption is that MVCC will take care of that?

Joe



Re: [HACKERS] pg_dump issues

2011-10-02 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
> While investigating a client problem I just observed that pg_dump takes
> a surprisingly large amount of time to dump a schema with a large number
> of views. The client's hardware is quite spiffy, and yet pg_dump is
> taking many minutes to dump a schema with some 35,000 views. Here's a
> simple test case:
>
>    create schema views;
>    do 'begin for i in 1 .. 10000 loop execute $$create view views.v_$$
>    || i ||$$ as select current_date as d, current_timestamp as ts,
>    $_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
>    loop; end;';
>
> On my modest hardware this database took 4m18.864s for pg_dump to run.

It takes about that on my machine too ... with --enable-cassert.
oprofile said that 90% of the runtime was going into AllocSetCheck,
so I rebuilt without cassert, and the runtime dropped to 16 seconds.
What were you testing?

(Without cassert, it looks like LockReassignCurrentOwner is the next
biggest time sink; I'm wondering if there's some sort of O(N^2) behavior
in there.)

regards, tom lane



[HACKERS] pg_dump issues

2011-10-01 Thread Andrew Dunstan
While investigating a client problem I just observed that pg_dump takes 
a surprisingly large amount of time to dump a schema with a large number 
of views. The client's hardware is quite spiffy, and yet pg_dump is 
taking many minutes to dump a schema with some 35,000 views. Here's a 
simple test case:


   create schema views;
   do 'begin for i in 1 .. 10000 loop execute $$create view views.v_$$
   || i ||$$ as select current_date as d, current_timestamp as ts,
   $_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
   loop; end;';


On my modest hardware this database took 4m18.864s for pg_dump to run. 
Should we be looking at replacing the retail operations which consume 
most of this time with something that runs faster?


There is also this gem of behaviour, which is where I started:

   p1                  p2
   begin;
   drop view foo;
                       pg_dump
   commit;
                       boom.

with this error:

   2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
   relation with OID 133640
   2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
   pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef

Of course, this isn't caused by having a large catalog, but it's 
terrible nevertheless. I'm not sure what to do about it.


cheers

andrew




Re: [HACKERS] pg_dump issues

2011-10-01 Thread Joe Abbate
On 10/01/2011 05:08 PM, Andrew Dunstan wrote:
> While investigating a client problem I just observed that pg_dump takes
> a surprisingly large amount of time to dump a schema with a large number
> of views. The client's hardware is quite spiffy, and yet pg_dump is
> taking many minutes to dump a schema with some 35,000 views. Here's a
> simple test case:
>
>    create schema views;
>    do 'begin for i in 1 .. 10000 loop execute $$create view views.v_$$
>    || i ||$$ as select current_date as d, current_timestamp as ts,
>    $_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
>    loop; end;';
>
> On my modest hardware this database took 4m18.864s for pg_dump to run.
> Should we be looking at replacing the retail operations which consume
> most of this time with something that runs faster?

How modest?  Was there anything else in the database?  I tried with 9000
views (because I didn't want to bother increasing
max_locks_per_transaction) and pg_dump ran in less than 10 seconds
(8.991s), redirecting (plain-text) output to a file (this is on a Core i5).
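
A side note on that limit: creating every view inside one DO block means a
single transaction holding a lock per new relation, which is presumably
the pressure on max_locks_per_transaction alluded to above. Checking the
setting is simple; raising it means editing postgresql.conf and restarting:

   SHOW max_locks_per_transaction;   -- 64 by default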

> There is also this gem of behaviour, which is where I started:
>
>    p1                  p2
>    begin;
>    drop view foo;
>                        pg_dump
>    commit;
>                        boom.
>
> with this error:
>
>    2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
>    relation with OID 133640
>    2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
>    pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
>
> Of course, this isn't caused by having a large catalog, but it's
> terrible nevertheless. I'm not sure what to do about it.

Couldn't you run pg_dump with --lock-wait-timeout?

Joe



Re: [HACKERS] pg_dump issues

2011-10-01 Thread Andrew Dunstan



On 10/01/2011 05:48 PM, Joe Abbate wrote:

> On 10/01/2011 05:08 PM, Andrew Dunstan wrote:
>> While investigating a client problem I just observed that pg_dump takes
>> a surprisingly large amount of time to dump a schema with a large number
>> of views. The client's hardware is quite spiffy, and yet pg_dump is
>> taking many minutes to dump a schema with some 35,000 views. Here's a
>> simple test case:
>>
>>    create schema views;
>>    do 'begin for i in 1 .. 10000 loop execute $$create view views.v_$$
>>    || i ||$$ as select current_date as d, current_timestamp as ts,
>>    $_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
>>    loop; end;';
>>
>> On my modest hardware this database took 4m18.864s for pg_dump to run.
>> Should we be looking at replacing the retail operations which consume
>> most of this time with something that runs faster?
>
> How modest?  Was there anything else in the database?  I tried with 9000
> views (because I didn't want to bother increasing
> max_locks_per_transaction) and pg_dump ran in less than 10 seconds
> (8.991s), redirecting (plain-text) output to a file (this is on a Core i5).


Yeah, it must be pretty modest :-) On more powerful h/w I get the same.
I need to dig further into why it's taking so long to dump my client's
schema on server-class hardware.


>> There is also this gem of behaviour, which is where I started:
>>
>>    p1                  p2
>>    begin;
>>    drop view foo;
>>                        pg_dump
>>    commit;
>>                        boom.
>>
>> with this error:
>>
>>    2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
>>    relation with OID 133640
>>    2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
>>    pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
>>
>> Of course, this isn't caused by having a large catalog, but it's
>> terrible nevertheless. I'm not sure what to do about it.
>
> Couldn't you run pg_dump with --lock-wait-timeout?


How would that help? This isn't a lock failure.


cheers

andrew



Re: [HACKERS] - pg_dump issues

2002-09-11 Thread Oliver Elphick

On Wed, 2002-09-11 at 21:19, Tom Lane wrote:
> In the meantime, I think that we shouldn't mess with pg_dump's basically
> OID-order-driven dump ordering.  It works in normal cases, and adding
> arbitrary rules to it to fix one corner case is likely to accomplish
> little except breaking other corner cases.

I can see that Lamar and I are going to have major problems dealing with
users who fall over these problems.  There are some things that simply
cannot be handled automatically, such as user-written functions that
return opaque.  Then there are issues of ordering; and finally the fact
that we need to use the new pg_dump with the old binaries to get a
useful dump.

It seems to me that I shall have to make the new package such that it
can exist alongside the old one for a time, or else possibly separate
7.3 pg_dump and pg_dumpall into a separate package.  It is going to be a
total pain!

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 I am crucified with Christ; nevertheless I live; yet 
  not I, but Christ liveth in me; and the life which I 
  now live in the flesh I live by the faith of the Son 
  of God, who loved me, and gave himself for me.   
 Galatians 2:20 





Re: [HACKERS] - pg_dump issues

2002-09-11 Thread Lamar Owen

On Wednesday 11 September 2002 05:40 pm, Oliver Elphick wrote:
> On Wed, 2002-09-11 at 21:19, Tom Lane wrote:
>> In the meantime, I think that we shouldn't mess with pg_dump's basically
>> OID-order-driven dump ordering.  It works in normal cases, and adding
>> arbitrary rules to it to fix one corner case is likely to accomplish
>> little except breaking other corner cases.
>
> I can see that Lamar and I are going to have major problems dealing with
> users who fall over these problems.

Yes, we are.  Thankfully, with RPM dependencies I can prevent blind upgrades.
But that does not help the data migration issue this release is going to be.
Guys, migration that is this shabby is, well, shabby.  This _has_ to be fixed
so that a dump of 7.2.2 data (not 7.2.0, Tom) can be cleanly restored into 7.3.
That is, after all, our only migration path.

I think this upgrade/migration nightmare scenario warrants upping the version 
to 8.0 to draw attention to the potential problems.

> It seems to me that I shall have to make the new package such that it
> can exist alongside the old one for a time, or else possibly separate
> 7.3 pg_dump and pg_dumpall into a separate package.  It is going to be a
> total pain!

I had planned on making just such a 'pg_dump' package -- but if the 7.3 
pg_dump isn't going to produce useful output, it seems like a waste of time.

However, the jury is still out -- what sort of percentages are involved?  That
is, how likely is it that problems will happen?

Bruce, I mentioned a sed/perl/awk script already to massage the dump into a 
7.3-friendly form -- but we need to gather the cases that are involved.  
Methinks every single OpenACS installation will hit this issue.

How big is the problem?  It's looking bigger with each passing day, ISTM.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [HACKERS] - pg_dump issues

2002-09-11 Thread Lamar Owen

On Wednesday 11 September 2002 09:44 pm, Bruce Momjian wrote:
> Lamar Owen wrote:
>> Bruce, I mentioned a sed/perl/awk script already to massage the dump into
>> a 7.3-friendly form -- but we need to gather the cases that are involved.
>> Methinks every single OpenACS installation will hit this issue.
>>
>> How big is the problem?  It's looking bigger with each passing day, ISTM.
>
> That is exactly what I want to know and document on the open items page.
> I am having trouble understanding some of the failures because no one is
> showing the failure messages/statements, just describing them.

Well, I am going to _try_ to lay aside an hour or two tomorrow or Friday and 
try to import a 7.2.2 OpenACS dump into a 7.3 installation.  I'll try to get 
very verbose with the errors... :-)
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




[HACKERS] pg_dump issues

2001-01-25 Thread Rod Taylor

Could we add a flag to remove the postgres-specific information from a
pg_dump?

--
UPDATE "pg_class" SET "reltriggers" = 0 WHERE "relname" ~*
'market_type';

BEGIN TRANSACTION;

CREATE TEMP TABLE "tr" ("tmp_relname" name, "tmp_reltriggers"
smallint);

INSERT INTO "tr" SELECT C."relname", count(T."oid") FROM "pg_class" C,
"pg_trigger" T WHERE C."oid" = T."tgrelid" AND C."relname" ~*
'market_type' GROUP BY 1;

UPDATE "pg_class" SET "reltriggers" = TMP."tmp_reltriggers" FROM "tr"
TMP WHERE "pg_class"."relname" = TMP."tmp_relname";

DROP TABLE "tr";

COMMIT TRANSACTION;

--

These make importing into other database systems rather difficult.

--
Rod Taylor

There are always four sides to every story: your side, their side, the
truth, and what really happened.





Re: [HACKERS] pg_dump issues

2001-01-25 Thread Philip Warner

At 14:48 24/01/01 -0500, Rod Taylor wrote:
> Could we add a flag to remove the postgres-specific information from a
> pg_dump?

It's easy enough to do, but removing all PG-specific information is
probably undesirable since, eg, pg_dump does not dump foreign key
constraints in a standard way (it just dumps the rules). pg_dump also
issues '\connect' statements, and will dump user-defined types, rules and
functions that will probably not conform to the SQL standard.

Over time I would like to see the output of pg_dump become as standard as
possible - but in the meantime, would people see any value in a flag to do
any of the following:

(a) disable modifications to system tables (eg. turning off triggers and
constraints) 

(b) just disable code which turns off the triggers? Or something else...

(c) something else


I can actually see value in (b) since people may want to load a single
table *and* have triggers operating.
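
A hedged sketch of what (b) would enable -- per-table statements instead
of direct pg_class updates. (This ALTER TABLE syntax arrived in later
PostgreSQL releases; it is shown only for contrast with the dump excerpt
quoted above.)

   ALTER TABLE market_type DISABLE TRIGGER ALL;
   -- ... load the table's data ...
   ALTER TABLE market_type ENABLE TRIGGER ALL;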



Philip Warner
Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81
Fax: (+61) 0500 83 82 82
Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371