Re: [GENERAL] dump to pg

2017-06-04 Thread Steven Chang
hello, not possible in your case. To my understanding, no tool can directly transform a binary dump (Oracle or MSSQL) into CSV format. Steven 2017-06-02 5:37 GMT+08:00 Nicolas Paris : > > If they aren't too big, you might get away by installing the express > edition

Re: [GENERAL] dump to pg

2017-06-02 Thread Neil Anderson
> > Thanks for the suggestion. Problem is the data is highly sensitive and > cannot go to the cloud or any untrusted place Sounds like the real question now is not how to import the data, but how to convert the backups you have to CSV or similar? Another idea for SQL Server is to use the bcp
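The bcp route suggested above can be sketched as follows. This is a dry run that only prints the commands it would execute; the database, table, and file names are placeholders, and the bcp flags assume a trusted-connection SQL Server setup. Note that bcp's -c mode emits naive delimiter-separated output, so real exports need care with quoting and NULLs.

```shell
#!/bin/sh
# Dry-run sketch (nothing touches a server): export a SQL Server table
# to CSV with bcp, then load it into Postgres with \copy.
# All names here are placeholders, not values from the thread.
MSSQL_DB="mydb"
TABLE="dbo.orders"
CSV="orders.csv"

BCP_CMD="bcp ${MSSQL_DB}.${TABLE} out ${CSV} -c -t, -S localhost -T"
LOAD_CMD="psql -d target_db -c \"\\copy orders FROM '${CSV}' WITH (FORMAT csv)\""

echo "$BCP_CMD"
echo "$LOAD_CMD"
```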

Re: [GENERAL] dump to pg

2017-06-02 Thread Nicolas Paris
> Or spin up an AWS SQL Server instance: > > https://aws.amazon.com/windows/resources/amis/ > Thanks for the suggestion. Problem is the data is highly sensitive and cannot go to the cloud or any untrusted place -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make

Re: [GENERAL] dump to pg

2017-06-02 Thread Adrian Klaver
On 06/02/2017 09:31 AM, Neil Anderson wrote: On 1 June 2017 at 17:37, Nicolas Paris wrote: If they aren't too big, you might get away by installing the express edition of the respective DBMS, then import them using the native tools, then export the data as CSV files.

Re: [GENERAL] dump to pg

2017-06-02 Thread Neil Anderson
On 1 June 2017 at 17:37, Nicolas Paris wrote: >> If they aren't too big, you might get away by installing the express edition >> of the respective DBMS, then import them using the native tools, then export >> the data as CSV files. Good idea. I think SQL Server Express is

Re: [GENERAL] dump to pg

2017-06-01 Thread Nicolas Paris
> If they aren't too big, you might get away by installing the express edition > of the respective DBMS, then import them using the native tools, then export > the data as CSV files. Thanks Thomas. Both are binary dumps. The Oracle one is a 30 TB database... -- Sent via pgsql-general mailing

Re: [GENERAL] dump to pg

2017-06-01 Thread Thomas Kellerer
Nicolas Paris wrote on 2017-05-31 at 16:43: Hi, I have dumps from oracle and microsoft sql server (no more details). Is it possible to load them "directly" into postgres (without oracle/mssql license)? dump -> csv -> postgres or something ? If those are binary dumps (e.g. a DataPump dump

Re: [GENERAL] dump to pg

2017-06-01 Thread Nicolas Paris
Thanks all. The point is I only have access to the dump files, no Oracle/MSSQL server instance access. I have noticed the warning around legality on that question. The best solution for me is to ask each vendor. Once I get an answer, I will come back here to share it. 2017-06-01 4:14 GMT-04:00 vinny

Re: [GENERAL] dump to pg

2017-06-01 Thread vinny
On 2017-05-31 16:43, Nicolas Paris wrote: Hi, I have dumps from oracle and microsoft sql server (no more details). Is it possible to load them "directly" into postgres (without oracle/mssql license)? dump -> csv -> postgres or something ? Thanks a lot A very, *very* short trip to google

Re: [GENERAL] dump to pg

2017-05-31 Thread Rob Sargent
> On May 31, 2017, at 9:27 AM, David G. Johnston > wrote: > > On Wed, May 31, 2017 at 7:43 AM, Nicolas Paris wrote: > Hi, > > I have dumps from oracle and microsoft sql server (no more details). Is it > possible to load them "directly" into

Re: [GENERAL] dump to pg

2017-05-31 Thread David G. Johnston
On Wed, May 31, 2017 at 7:43 AM, Nicolas Paris wrote: > Hi, > > I have dumps from oracle and microsoft sql server (no more details). Is it > possible to load them "directly" into postgres (without oracle/mssql > license)? > dump -> csv -> postgres > or something ? > >

[GENERAL] dump to pg

2017-05-31 Thread Nicolas Paris
Hi, I have dumps from oracle and microsoft sql server (no more details). Is it possible to load them "directly" into postgres (without oracle/mssql license)? dump -> csv -> postgres or something ? Thanks a lot

Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-18 Thread Melvin Davidson
On Tue, Oct 18, 2016 at 3:11 PM, Patrick B wrote: > Thank you guys... good to know that pg_dump does all the job for me :) > > So.. If I only dump using the --schema-only option, it will dump all the > schemas, constraints, indexes and tables? > > Because probably, I'll

Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-18 Thread Patrick B
Thank you guys... good to know that pg_dump does all the job for me :) So... if I only dump using the --schema-only option, will it dump all the schemas, constraints, indexes and tables? Because probably I'll have to import the data manually. Not in a single pg_restore, I mean. (AWS issue)
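The split the question asks about can be sketched like this. It is a dry run that only prints the commands; database and file names are placeholders. --schema-only carries tables, constraints, and indexes, but roles live outside any one database, so they need pg_dumpall.

```shell
#!/bin/sh
# Dry-run sketch: schema/data/globals split with pg_dump and pg_dumpall.
# Prints the commands only; "mydb" and the file names are placeholders.
DB="mydb"
SCHEMA_CMD="pg_dump --schema-only -f schema.sql ${DB}"   # DDL: tables, indexes, constraints
DATA_CMD="pg_dump --data-only -f data.sql ${DB}"         # rows only
GLOBALS_CMD="pg_dumpall --globals-only -f globals.sql"   # roles, tablespaces

echo "$SCHEMA_CMD"
echo "$DATA_CMD"
echo "$GLOBALS_CMD"
```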

Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-18 Thread Jeff Janes
On Mon, Oct 17, 2016 at 8:32 PM, Patrick B wrote: > Hi guys, > > I need to export an entire database to another server, for testing purpose. > > Is there any way to export all indexes and constraints ? > Postgres 9.2 > Patrick > By default pg_dump will export the

Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-18 Thread Condor
On 18-10-2016 06:32, Patrick B wrote: Hi guys, I need to export an entire database to another server, for testing purpose. Is there any way to export all indexes and constraints ? Postgres 9.2 Patrick Hello, pg_dump of the database is what you need, but if you are asking whether you can export data +

Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-17 Thread Adrian Klaver
On 10/17/2016 08:32 PM, Patrick B wrote: Hi guys, I need to export an entire database to another server, for testing purpose. Is there any way to export all indexes and constraints ? pg_dump the_database Postgres 9.2 Patrick -- Adrian Klaver adrian.kla...@aklaver.com -- Sent via

Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-17 Thread John R Pierce
On 10/17/2016 8:32 PM, Patrick B wrote: I need to export an entire database to another server, for testing purpose. Is there any way to export all indexes and constraints ? Postgres 9.2 on the existing machine, as the postgres user pg_dump -Fc -d databasename -f filename.pgdump then,
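The custom-format round trip suggested above can be sketched as a dry run that only prints the commands; the database names are the thread's placeholders. The -j flag enables parallel restore (pg_restore 8.4+) and needs a seekable archive file, which -Fc output is.

```shell
#!/bin/sh
# Dry-run sketch of dump + restore with the custom format.
# Prints the commands only; names are placeholders.
DUMP_CMD="pg_dump -Fc -f databasename.pgdump databasename"
RESTORE_CMD="pg_restore -d newdb -j 4 databasename.pgdump"

echo "$DUMP_CMD"
echo "$RESTORE_CMD"
```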

[GENERAL] Dump all the indexes/constraints/roles

2016-10-17 Thread Patrick B
Hi guys, I need to export an entire database to another server, for testing purpose. Is there any way to export all indexes and constraints ? Postgres 9.2 Patrick

[GENERAL] dump postgres cluster with different encodings

2015-03-05 Thread basti
Hello I have a postgres cluster with following databases davical| pgadmin | UTF8 | en_US.utf8 | en_US.utf8 | test| test | LATIN9| en_US.iso885915 | en_US.iso885915 | foo| postgres | SQL_ASCII |

Re: [GENERAL] dump postgres cluster with different encodings

2015-03-05 Thread basti
Without -E there was the same error. The DB was created with: create database davical WITH ENCODING = 'UTF8' LC_CTYPE='en_US.utf8' LC_COLLATE='en_US.utf8' TEMPLATE template0; I fixed the error with localedef -f UTF-8 -i en_US en_US.UTF-8 After a restart of postgres all is fine. Thanks a

Re: [GENERAL] dump postgres cluster with different encodings

2015-03-05 Thread Adrian Klaver
On 03/05/2015 02:48 AM, basti wrote: Hello I have a postgres cluster with following databases davical| pgadmin | UTF8 | en_US.utf8 | en_US.utf8 | test| test | LATIN9| en_US.iso885915 | en_US.iso885915 | foo

[GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Chris Curvey
I have a database with the following structure: Create table bar... Create function subset_of_bar ... (which does a select on a subset of bar) Create table foo... Alter table foo add constraint mycheck check subset_of_bar(id); I pg_dumped my database, and tried to pg_restore it on another

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Shaun Thomas
On 08/07/2014 10:00 AM, Chris Curvey wrote: I’ve done some searching and am coming up empty. Is there a way to get pg_restore to apply constraints AFTER loading all the tables Kinda. PostgreSQL applies constraints with hidden system-level triggers. An easy way to turn them off is to use

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Shaun Thomas
On 08/07/2014 01:09 PM, Chris Curvey wrote: The disable trigger statement runs without error, but does not seem to have any effect. :( Apparently this trick only works for disabling foreign keys. I'm not sure how to temporarily disable check constraints. You might have to drop the
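Since a CHECK constraint cannot simply be "disabled", one workable sequence is drop, load, re-add. A dry-run sketch that only prints the commands, using the table and constraint names from the thread's example; the NOT VALID / VALIDATE CONSTRAINT steps assume PostgreSQL 9.2 or later.

```shell
#!/bin/sh
# Dry-run sketch: drop the CHECK constraint, load data, re-add it
# without rechecking existing rows, then validate separately.
# Prints commands only; "mydb" and "data.sql" are placeholders.
DB="mydb"
DROP_CMD="psql -d ${DB} -c \"ALTER TABLE foo DROP CONSTRAINT mycheck\""
LOAD_CMD="psql -d ${DB} -f data.sql"
ADD_CMD="psql -d ${DB} -c \"ALTER TABLE foo ADD CONSTRAINT mycheck CHECK (subset_of_bar(id)) NOT VALID\""
CHECK_CMD="psql -d ${DB} -c \"ALTER TABLE foo VALIDATE CONSTRAINT mycheck\""

for c in "$DROP_CMD" "$LOAD_CMD" "$ADD_CMD" "$CHECK_CMD"; do
    echo "$c"
done
```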

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Chris Curvey
-Original Message- From: Shaun Thomas [mailto:stho...@optionshouse.com] Sent: Thursday, August 07, 2014 12:43 PM To: Chris Curvey; pgsql-general@postgresql.org Subject: Re: [GENERAL] dump/restore with a hidden dependency? On 08/07/2014 10:00 AM, Chris Curvey wrote: I've done

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Tom Lane
Chris Curvey ccur...@zuckergoldberg.com writes: I have a database with the following structure: Create table bar... Create function subset_of_bar ... (which does a select on a subset of bar) Create table foo... Alter table foo add constraint mycheck check subset_of_bar(id); Basically, that's

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Chris Curvey
-Original Message- From: Tom Lane [mailto:t...@sss.pgh.pa.us] Sent: Thursday, August 07, 2014 2:50 PM To: Chris Curvey Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] dump/restore with a hidden dependency? Chris Curvey ccur...@zuckergoldberg.com writes: I have a database

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread David G Johnston
Chris Curvey-3 wrote -Original Message- From: Tom Lane [mailto: tgl@.pa ] Sent: Thursday, August 07, 2014 2:50 PM To: Chris Curvey Cc: pgsql-general@ Subject: Re: [GENERAL] dump/restore with a hidden dependency? Chris Curvey lt; ccurvey@ gt; writes: I have a database

Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Kevin Grittner
Chris Curvey ccur...@zuckergoldberg.com wrote: Perhaps a pair of triggers?  An insert-or-update trigger on foo, and a delete-or-update trigger on bar? Using a foreign key constraint is best if that can do the right thing.  If that doesn't work, triggers like you describe are probably the best

Re: [GENERAL] Dump Database

2014-03-20 Thread José Pedro Santos
Hi Dennis, I already managed it. The problem was that I wasn't in the right folder... rookie mistake. Thank you for your email. Regards, José Santos From: den...@kabonkulator.com To: pgsql-general@postgresql.org Subject: Re: [GENERAL] Dump Database Date: Mon, 17 Mar 2014 08:14:03 -0500

[GENERAL] Dump Database

2014-03-17 Thread José Pedro Santos
Dear all, I'm trying to dump my database using the following command after I enter as su postgres: pg_dump -U postgres mydb -f mydb.sql ... but I'm always getting the following message: pg_dump: [archiver] could not open output file mydb.sql : Permission denied I tried to use the same

Re: [GENERAL] Dump Database

2014-03-17 Thread Martin French
pgsql-general-ow...@postgresql.org wrote on 17/03/2014 12:50:20: From: José Pedro Santos zpsant...@hotmail.com To: Postgres Ajuda pgsql-general@postgresql.org, Date: 17/03/2014 12:56 Subject: [GENERAL] Dump Database Sent by: pgsql-general-ow...@postgresql.org Dear all, I'm trying

Re: [GENERAL] Dump Database

2014-03-17 Thread Dennis
Of José Pedro Santos Sent: Monday, March 17, 2014 7:50 AM To: Postgres Ajuda Subject: [GENERAL] Dump Database Dear all, I'm trying to dump my database using the following command after I enter as su postgres: pg_dump -U postgres mydb -f mydb.sql ... but I'm always getting the following message

Re: [GENERAL] Dump/Reload pg_statistic to cut time from pg_upgrade?

2013-09-03 Thread Bruce Momjian
On Sun, Sep 1, 2013 at 10:35:57AM -0700, Jeff Davis wrote: [ late response, but might still be useful to someone ] You can work around the problem with a little effort if you call array_in directly. It takes the type output (cstring), element type (oid), and element typmod (integer). To

Re: [GENERAL] Dump/Reload pg_statistic to cut time from pg_upgrade?

2013-09-01 Thread Jeff Davis
On Mon, 2013-07-08 at 14:20 -0500, Jerry Sievers wrote: Planning to pg_upgrade some large (3TB) clusters using hard link method. Run time for the upgrade itself takes around 5 minutes. Nice!! Origin version 8.4 and destination version 9.1. Unfortunately the post-upgrade analyze of the

Re: [GENERAL] Dump file created with pg_dump cannot be restored with psql

2013-08-09 Thread gilroy
For that matter, for the first time we tried enforcing some of the rules of CFs this time, and I'd like to hear if people think that helped. I think he merit of fast promote is - allowing quick connection by skipping checkpoint and its demerit is - taking little bit longer when crash-recovery

[GENERAL] Dump file created with pg_dump cannot be restored with psql

2013-08-02 Thread tot-to
I have two installation of postgresql-server-9.2.4 on Gentoo. I try to just copy database from one to another. According to the documentation http://www.postgresql.org/docs/9.2/static/backup-dump.html I created dump file: psql -U role database dumpfile.sql copied it to another machine,

Re: [GENERAL] Dump file created with pg_dump cannot be restored with psql

2013-08-02 Thread Adrian Klaver
On 08/02/2013 05:03 PM, tot-to wrote: I have two installation of postgresql-server-9.2.4 on Gentoo. I try to just copy database from one to another. According to the documentation http://www.postgresql.org/docs/9.2/static/backup-dump.html I created dump file: psql -U role database

Re: [GENERAL] Dump file created with pg_dump cannot be restored with psql

2013-08-02 Thread tot-to
Oh, sorry. I mixed up the dumps... I am migrating from mysql and by mistake I tried to apply a dump from mysqldump --compat=postgresql that was named very similarly to the dump of the finally converted database produced by pg_dump (for the purpose of copying from the test to the main server). Bash completion and then

[GENERAL] Dump/Reload pg_statistic to cut time from pg_upgrade?

2013-07-09 Thread Jerry Sievers
Originally posted on pg_admin but got no bites there... Admins; Planning to pg_upgrade some large (3TB) clusters using hard link method. Run time for the upgrade itself takes around 5 minutes. Nice!! Origin version 8.4 and destination version 9.1. Unfortunately the post-upgrade analyze of the

[GENERAL] Dump table from schema and function from another one

2013-03-19 Thread Luke Luke
Hi guys, Is it possible to do a Postgres dump where I dump from a database the complete schema [A] (tables+data, procedures) and from another schema [B] only the stored procedures, with one dump command? Thanks

Re: [GENERAL] Dump table from schema and function from another one

2013-03-19 Thread Adrian Klaver
On 03/19/2013 01:02 PM, Luke Luke wrote: Hi guys, Is it possible to do a Postgres dump where I dump from a database the complete schema [A] (tables+data, procedures) and from another schema [B] only the stored procedures, with one dump command? AFAIK there is no way to dump just stored

Re: [GENERAL] dump order by

2012-12-24 Thread Jose Soares
I'm sorry, my first example was incomplete. I need to migrate data from PostgreSQL to Oracle, thus I have to use dump --column-inserts instead of copy, to have output like this but ordered by PK: INSERT INTO test (id, note, id_father) VALUES (6, 'Homer Simpson ', 5); INSERT INTO test (id, note,
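pg_dump itself cannot order rows, but an ordered export is easy with \copy over a query. A dry-run sketch that only prints the command, using the table and column from the thread's example; for the Oracle-bound INSERT case, an ordered CSV plus Oracle's own loader is usually more robust than sorting INSERT statements.

```shell
#!/bin/sh
# Dry-run sketch: ordered export via \copy over a SELECT ... ORDER BY.
# Prints the command only; "mydb" is a placeholder database name.
EXPORT_CMD='psql -d mydb -c "\copy (SELECT * FROM test ORDER BY id) TO STDOUT WITH (FORMAT csv)"'

echo "$EXPORT_CMD > test_sorted.csv"
```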

[GENERAL] dump order by

2012-12-22 Thread jo
Hi all, I would like to know if it is possible to dump a table ordered by its primary key. Take a look at this test table... \d test Table public.test Column| Type | Modifiers ---+-+--- id

Re: [GENERAL] dump order by

2012-12-22 Thread salah jubeh
jose.soa...@sferacarta.com To: pgsql-general@postgresql.org Sent: Saturday, December 22, 2012 6:29 PM Subject: [GENERAL] dump order by Hi all, I would like to know if it is possible to dump a table ordered by its primary key. Take a look at this test table... \d test Table

Re: [GENERAL] dump order by

2012-12-22 Thread Adrian Klaver
On 12/22/2012 09:29 AM, jo wrote: Hi all, I would like to know if it is possible to dump a table ordered by its primary key. Take a look at this test table... \d test Table public.test Column| Type | Modifiers

Re: [GENERAL] Dump/restore indexes and functions in public schema

2012-10-09 Thread marian krucina
Example: in PG91: CREATE FUNCTION function_y(x INT) RETURNS INT AS $$ SELECT $1*$1 $$ LANGUAGE SQL; CREATE FUNCTION function_x(x INT) RETURNS INT AS $$ SELECT function_y($1) $$ LANGUAGE SQL; CREATE SCHEMA schema_a; CREATE TABLE schema_a.table_a(i INT); CREATE INDEX ON

Re: [GENERAL] Dump/restore indexes and functions in public schema

2012-10-09 Thread Tom Lane
marian krucina marian.kruc...@gmail.com writes: Example: in PG91: CREATE FUNCTION function_y(x INT) RETURNS INT AS $$ SELECT $1*$1 $$ LANGUAGE SQL; CREATE FUNCTION function_x(x INT) RETURNS INT AS $$ SELECT function_y($1) $$ LANGUAGE SQL; CREATE SCHEMA schema_a; CREATE TABLE

Re: [GENERAL] Dump/restore indexes and functions in public schema

2012-10-08 Thread Tom Lane
marian krucina marian.kruc...@gmail.com writes: pg_upgrade failed on own server, because we used functions from public schema in index. We install common functions (e.g. postgresql contrib) to public schema. Tables and indexes are in another schema, and names of functions without a schema

Re: [GENERAL] Dump functions alone

2012-02-16 Thread Jan Otto
Any help in getting function argument names is appreciated. Thank you take a look at pg_catalog.pg_get_function_arguments(oid) regards, jan -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription:

[GENERAL] Dump functions alone

2012-02-15 Thread Rajan, Pavithra
Hello all - Is there a way to dump just the functions in a schema into a text/SQL file? Thank you.

Re: [GENERAL] Dump functions alone

2012-02-15 Thread Raghavendra
You have two options. - Use contrib module pg_extractor https://github.com/omniti-labs/pg_extractor - Use pg_proc catalog to get function definition --- Regards, Raghavendra EnterpriseDB Corporation Blog: http://raghavt.blogspot.com/ On Wed, Feb 15, 2012 at 6:59 PM, Rajan, Pavithra

Re: [GENERAL] Dump functions alone

2012-02-15 Thread Raghavendra
One more thing you can also get it from pg_get_functiondef() system function. --- Regards, Raghavendra EnterpriseDB Corporation Blog: http://raghavt.blogspot.com/ On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra raghavendra@enterprisedb.com wrote: You have two options. - Use contrib
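The pg_get_functiondef() suggestion above can be sketched as a dry run that only prints the command; the schema and database names are placeholders. On PostgreSQL 10 and earlier you may also want to exclude aggregates, since pg_get_functiondef() raises an error on them.

```shell
#!/bin/sh
# Dry-run sketch: extract function definitions for one schema via
# pg_get_functiondef(). Prints the command only; names are placeholders.
SQL="SELECT pg_get_functiondef(p.oid) || ';' FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace WHERE n.nspname = 'public'"
EXPORT_CMD="psql -At -d mydb -c \"${SQL}\""

echo "$EXPORT_CMD > functions.sql"
```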

Re: [GENERAL] Dump functions alone

2012-02-15 Thread Rajan, Pavithra
@postgresql.org Subject: Re: [GENERAL] Dump functions alone One more thing you can also get it from pg_get_functiondef() system function. --- Regards, Raghavendra EnterpriseDB Corporation Blog: http://raghavt.blogspot.com/ On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra raghavendra

Re: [GENERAL] Dump functions alone

2012-02-15 Thread Rajan, Pavithra
: [GENERAL] Dump functions alone One more thing you can also get it from pg_get_functiondef() system function. --- Regards, Raghavendra EnterpriseDB Corporation Blog: http://raghavt.blogspot.com/ On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra raghavendra@enterprisedb.com wrote: You

[GENERAL] Dump Error Message

2011-11-04 Thread Bob Pawley
Hi I am attempting to dump a database using PostgreDAC. I am getting the following error message which I don’t understand. Can someone shed some light on this? “Error message from server: ERROR: column tgisconstraint does not exist LINE 1: ...c AS tgfname, tgtype, tgnargs, tgargs, tgenabled,

Re: [GENERAL] Dump Error Message

2011-11-04 Thread John R Pierce
On 11/04/11 10:22 AM, Bob Pawley wrote: I am attempting to dump a database using PostgreDAC. this postgresDAC? http://www.microolap.com/products/connectivity/postgresdac/ thats a commercial product, you probably should contact them for support. -- john r pierceN

Re: [GENERAL] Dump Error Message

2011-11-04 Thread Tom Lane
Bob Pawley rjpaw...@shaw.ca writes: I am attempting to dump a database using PostgreDAC. I am getting the following error message which I don’t understand. Can someone shed some light on this? Error message from server: ERROR: column tgisconstraint does not exist The

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-19 Thread Adrian Klaver
On Thursday, August 18, 2011 3:25:59 pm Dmitry Koterov wrote: Mmm, --disable-triggers is not surely enough - we also have RULEs and (much worse) INDEXes. If we create all indices and then restore all data, it is MUCH SLOWER than restore the data first and then - create all indices. So I

[GENERAL] Dump database roles for full restoration

2011-08-19 Thread Diego Augusto Molina
Short description: * Need: migration of database, with roles relevant to it. * From server: pg 8.1; no superuser access. * To server: pg 8.4; full access. * Database in question: depends on few roles, mainly group roles used to set permission; there are other roles (login roles) which are members

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-19 Thread Dimitri Fontaine
Dmitry Koterov dmi...@koterov.ru writes: Is there any way (or hack) to dump the whole database, but to exclude the DATA from a table within this dump? (DDL of the table should not be excluded: after restoring the data the excluded table should look empty.) The pg_staging tool allows you to do

[GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
Hello. Is there any way (or hack) to dump the whole database, but to exclude the DATA from a table within this dump? (DDL of the table should not be excluded: after restoring the data the excluded table should look empty.) I see -T switch of pg_dump, but seems -T excludes the data AND the DDL of
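For what it's worth, pg_dump grew exactly this capability in PostgreSQL 9.2 (shortly after this thread): --exclude-table-data keeps a table's DDL but skips its rows. A dry-run sketch that only prints the command; the table and database names are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: dump everything, but omit one table's rows while
# keeping its DDL (pg_dump 9.2+). Prints the command only.
DUMP_CMD="pg_dump --exclude-table-data=big_log_table -f mydb.sql mydb"

echo "$DUMP_CMD"
```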

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Adrian Klaver
On Thursday, August 18, 2011 12:49:45 pm Dmitry Koterov wrote: Hello. Is there any way (or hack) to dump the whole database, but to exclude the DATA from a table within this dump? (DDL of the table should not be excluded: after restoring the data the excluded table should look empty.) I

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
1. I need to shorten pg_dump results (for backup purposes), so pg_restore is too late for that... 2. If I use pg_dump -s separately, the data may not load (or may load too slowly) after that, because all indices/foreign keys are already there. Is there a way to split pg_dump -s into 2 parts: the first

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Adrian Klaver
On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote: 1. I need to shorten pg_dump results (for backup purposes), so pg_restore is too late for that.. 2. If I use pg_dump -s separately, the data may not load (or load to slow) after that, because all indices/foreign keys are already

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
Thanks, pg_dump --data-only --disable-triggers is the king. (Unfortunately it is not supported by pg_dumpall, but it is entirely another story. :-) On Fri, Aug 19, 2011 at 12:36 AM, Adrian Klaver adrian.kla...@gmail.comwrote: On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote: 1.

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
Mmm, --disable-triggers is not surely enough - we also have RULEs and (much worse) INDEXes. If we create all indices and then restore all data, it is MUCH SLOWER than restore the data first and then - create all indices. So I think that there is no work-around really... I propose to include an

[GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Condor
Hello ppl, can I ask how to dump a large DB? I read the documentation but I had a problem with split a year ago and have not used it since. The problem was when I ran: pg_dump dbname | split -b 1G - filename I was unable to restore it correctly. When I started restoring the DB I got an error from SQL: it
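The failure mode described here can be demonstrated, and avoided, without a database at all: chunks produced by split must be concatenated back into one stream before psql or pg_restore ever sees them, since a single part is not a valid dump on its own. A runnable sketch:

```shell
#!/bin/sh
# Round-trip demo of split/reassemble (no database needed).
# The key point: cat ALL parts, in order, back into one stream.
seq 1 200000 > dump.sql                  # stand-in for a large SQL dump
split -b 256k dump.sql dump.sql.part.    # chunks: dump.sql.part.aa, .ab, ...
cat dump.sql.part.* > restored.sql       # the glob sorts the parts correctly
cmp -s dump.sql restored.sql && echo "round trip OK"
```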

Re: [GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Christian Ullrich
* Condor wrote: Problem was when I ran: pg_dump dbname | split -b 1G - filename. I was unable to restore it correctly. When I started restoring the DB I got an error from SQL: it did not like one line. I investigated and the problem was in the last line of the first file: a value field was something like 'This is a '

Re: [GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Craig Ringer
On 5/07/2011 5:00 PM, Condor wrote: Hello ppl, can I ask how to dump large DB ? Same as a smaller database: using pg_dump . Why are you trying to split your dumps into 1GB files? What does that gain you? Are you using some kind of old file system and operating system that cannot handle

Re: [GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Condor
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote: On 5/07/2011 5:00 PM, Condor wrote: Hello ppl, can I ask how to dump large DB ? Same as a smaller database: using pg_dump . Why are you trying to split your dumps into 1GB files? What does that gain you? Are you using some kind of old

Re: [GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Tomas Vondra
Dne 5.7.2011 13:31, Condor napsal(a): On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote: On 5/07/2011 5:00 PM, Condor wrote: Hello ppl, can I ask how to dump large DB ? Same as a smaller database: using pg_dump . Why are you trying to split your dumps into 1GB files? What does that

Re: [GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread John R Pierce
On 07/05/11 4:31 AM, Condor wrote: Are you using some kind of old file system and operating system that cannot handle files bigger than 2GB? If so, I'd be pretty worried about running a database server on it. Well, I made the pg_dump on an ext3 fs with postgres 8.x and 9, and the sql file was truncated.

[GENERAL] dump restore to different schema

2011-05-18 Thread Craig de Stigter
Hi list We're writing a plugin for our website that loads single-table database dumps created by untrusted users. My question is two-fold: 1. I'm assuming that the dump format can contain arbitrary sql commands, so a pg_restore of this nature should be run under an untrusted account in its own

Re: [GENERAL] dump restore to different schema

2011-05-18 Thread Craig Ringer
On 05/19/2011 10:10 AM, Craig de Stigter wrote: 1. I'm assuming that the dump format can contain arbitrary sql commands, so a pg_restore of this nature should be run under an untrusted account in its own restricted schema. Can someone confirm that this is the case? Correct. You very

[GENERAL] dump error parent table oid of pg_rewrite entry oid not found

2010-04-07 Thread Doug Kyle
I have this error from pg_dumpall: pg_dump: failed sanity check, parent table OID 27974 of pg_rewrite entry OID 28689 not found I found a rule was dropped when this server was setup as s Slony replica because it conflicted with the Slony deny access trigger select oid,rulename from

Re: [GENERAL] dump error parent table oid of pg_rewrite entry oid not found

2010-04-07 Thread Tom Lane
Doug Kyle dk...@grpl.org writes: I have this error from pg_dumpall: pg_dump: failed sanity check, parent table OID 27974 of pg_rewrite entry OID 28689 not found ... So I'm thinking I'll delete from pg_rewrite where oid=28689, but I'm not sure if I should do anything with pg_class or

Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread karsten vennemann
@postgresql.org Subject: Re: [GENERAL] dump of 700 GB database Hello 2010/2/10 karsten vennemann kars...@terragis.net I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. What is the proper procedure to succeed

Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread Scott Marlowe
On Wed, Feb 17, 2010 at 3:44 PM, karsten vennemann kars...@terragis.net wrote: vacuum should clean out the dead tuples, then cluster on any large tables that are bloated will sort them out without needing too much temporary space. Yes, OK. I am running a vacuum full on a large table

Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread karsten vennemann
Note that cluster on a randomly ordered large table can be prohibitively slow, and it might be better to schedule a short downtime to do the following (pseudo code) alter table tablename rename to old_tablename; create table tablename like old_tablename; insert into tablename select *

[GENERAL] dump of 700 GB database

2010-02-09 Thread karsten vennemann
I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. What is the proper procedure to succeed with this - last time the dump stopped at 3.8 GB size, I guess. Should I combine the -Fc option of pg_dump and the split
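Combining -Fc with split, as the question suggests, can be sketched as a dry run that only prints the commands; the database and file names are placeholders. One caveat: parallel restore (-j) needs a seekable archive file, so reassemble the parts with cat into a real file first if you want it.

```shell
#!/bin/sh
# Dry-run sketch: custom-format dump piped through split, and a serial
# restore from the reassembled stream. Prints commands only.
DUMP_CMD="pg_dump -Fc mydb | split -b 1G - mydb.dump.part."
RESTORE_CMD="cat mydb.dump.part.* | pg_restore -d newdb"

echo "$DUMP_CMD"
echo "$RESTORE_CMD"
```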

Re: [GENERAL] dump of 700 GB database

2010-02-09 Thread Pavel Stehule
Hello 2010/2/10 karsten vennemann kars...@terragis.net I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. What is the proper procedure to succeed with this - last time the dump stopped at 3.8 GB size I guess. Should

Re: [GENERAL] dump of 700 GB database

2010-02-09 Thread John R Pierce
karsten vennemann wrote: I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. What is the proper procedure to succeed with this - last time the dump stopped at 3.8 GB size I guess. Should I combine the -Fc option of

[GENERAL] dump time increase by 1h with new kernel

2009-09-27 Thread Justin Pryzby
When we upgraded from linux-2.6.24 to ./linux-2.6.27, our pg_dump duration increased by 20%. My first attempt at resolution was to boot with elevator=deadline. However that's actually the default IO scheduler in both kernels. The two dmesg's are at:

[GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
By bad data, I mean a character that's not UTF8, such as hex 98. As far as I can tell, pg_dump is the tool to use. But it has serious drawbacks. If you dump in the custom format, the data is compressed (nice) and includes large objects (very nice). But, from my tests and the postings of others,
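The iconv cleanup alluded to in this thread can be shown with a small runnable demo (assuming GNU iconv is available): -c makes iconv drop bytes that are not valid UTF-8, such as the stray hex 98 mentioned above, while keeping the rest of the line intact.

```shell
#!/bin/sh
# Demo: strip invalid UTF-8 bytes from a plain-text dump with iconv -c.
# \230 (octal) is the stray 0x98 byte from the thread.
printf 'good line\nbad \230 byte\n' > dump.sql
iconv -f UTF-8 -t UTF-8 -c dump.sql > clean.sql || true
grep -c 'bad' clean.sql   # the line survives, minus the invalid byte
```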

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Tom Lane
John T. Dow [EMAIL PROTECTED] writes: If you dump in plain text format, you can at least inspect the dumped data and fix it manually or with iconv. But the plain text format doesn't support large objects (again, not nice). It does in 8.1 and later ... Also, neither of these methods gets

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Tom My mistake in not realizing that 8.1 and later can dump large objects in the plain text format. I guess when searching for answers to a problem, the posted information doesn't always specify the version. So, sorry about that. But the plain text format still has serious problems in that the

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Joshua Drake
On Mon, 25 Aug 2008 10:21:54 -0400 John T. Dow [EMAIL PROTECTED] wrote: By bad data, I mean a character that's not UTF8, such as hex 98. As far as I can tell, pg_dump is the tool to use. But it has serious drawbacks. If you dump in the custom format, the data is compressed (nice) and

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Joshua The TOC feature sounds good, as does converting a single table to plain text. But I can't find documentation for the TOC feature under pg_dump or pg_restore. I'm looking in postgresql-8.2.1-US.pdf. Neither could I see anything about converting a single table to a plain text dump.

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Joshua Drake
On Mon, 25 Aug 2008 13:37:13 -0400 John T. Dow [EMAIL PROTECTED] wrote: Joshua The TOC feature sounds good, as does converting a single table to plain text. But I can't find documentation for the TOC feature under pg_dump or pg_restore. I'm looking in postgresql-8.2.1-US.pdf. The

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Joshua Thank you very much for answering these various questions. I guess the compressed format is the best overall solution, except for roles. I find myself having a table with other information about users (application specific user type, etc) so perhaps the thing to do is record enough

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Joshua Drake
On Mon, 25 Aug 2008 17:05:53 -0400 John T. Dow [EMAIL PROTECTED] wrote: Joshua Thank you very much for answering these various questions. I guess the compressed format is the best overall solution, except for roles. I find myself having a table with other information about users

Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Sorry, I missed that. Thanks again. Now to put this all into effect. John On Mon, 25 Aug 2008 14:25:12 -0700, Joshua Drake wrote: On Mon, 25 Aug 2008 17:05:53 -0400 John T. Dow [EMAIL PROTECTED] wrote: Joshua Thank you very much for answering these various questions. I guess the

Re: [GENERAL] Dump and restore problem

2008-06-22 Thread David
On Sun, Jun 22, 2008 at 3:11 AM, Stuart Luppescu [EMAIL PROTECTED] wrote: On 木, 2008-06-19 at 11:57 +0200, David wrote: pg_restore: [tar archiver] could not open TOC file for input: No such file or directory It sounds like the tar file is no longer being created. Try manually running the

Re: [GENERAL] Dump and restore problem

2008-06-21 Thread Stuart Luppescu
On 木, 2008-06-19 at 11:57 +0200, David wrote: pg_restore: [tar archiver] could not open TOC file for input: No such file or directory It sounds like the tar file is no longer being created. Try manually running the commands, and verify that the dump, restore, and rsync still work

Re: [GENERAL] Dump and restore problem

2008-06-21 Thread Martin
@postgresql.org Sent: Saturday, June 21, 2008 9:11 PM Subject: Re: [GENERAL] Dump and restore problem On 木, 2008-06-19 at 11:57 +0200, David wrote: pg_restore: [tar archiver] could not open TOC file for input: No such file or directory It sounds like the tar file is no longer being created. Try manually

Re: [GENERAL] Dump and restore problem

2008-06-19 Thread David
On Wed, Jun 18, 2008 at 11:41 PM, Stuart Luppescu [EMAIL PROTECTED] wrote: [...] pg_restore: [tar archiver] could not open TOC file for input: No such file or directory It sounds like the tar file is no longer being created. Try manually running the commands, and verify that the dump,

[GENERAL] Dump and restore problem

2008-06-18 Thread Stuart Luppescu
Hello, I had a very nice system where I mirrored everything to another machine each night, so in case of disaster, I could easily switch over to the mirror. The backup script uses a line like this: pg_dump -b -F t -h $postgresql_hostname $i $location_backup_dir/`date
