Re: [GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

On 11/16/2017 03:13 PM, bricklen wrote:


On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson wrote:


v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump,
when is the data actually loaded?  I've looked in the list output and
don't see any "load" statements.


Look for COPY lines, that's how the data is restored.


$ pg_restore -l CDSHA01.dump > CDSHA01.txt
$ grep --color -i copy CDSHA01.txt
$ echo $?
1

There are lots of "restoring data" lines, though.  I should have thought to 
grep for that.
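For reference, data items never appear as COPY or "load" statements in the -l listing; they show up as "TABLE DATA" entries (the COPY commands only appear in the emitted plain-SQL output). A quick check, sketched against a made-up sample listing so it runs without the dump file:

```shell
# Grep a pg_restore -l listing for data items. The listing below is a
# hypothetical sample standing in for `pg_restore -l CDSHA01.dump`,
# so this snippet runs without the actual dump.
listing=$(cat <<'EOF'
2398; 0 16411 TABLE DATA public accounts postgres
2399; 0 16420 TABLE DATA public transactions postgres
2400; 2606 16425 CONSTRAINT public accounts_pkey postgres
EOF
)
# Count the data entries; prints 2 for the sample above.
printf '%s\n' "$listing" | grep -c 'TABLE DATA'
```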


One thing that puzzles me is how fast the tables (even large ones) loaded 
compared to how slow the pg_dump -Fc was.  Granted, I'm running -j4, but 
still, these were some really large, poorly compressible tables (the dump 
file was about as big as du -mc data/base).


--
World Peace Through Nuclear Pacification



Re: [GENERAL] pg_restore load data

2017-11-16 Thread melvin6925
The command is COPY, not load.


Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
-------- Original message --------
From: Ron Johnson <ron.l.john...@cox.net>
Date: 11/16/17 16:07 (GMT-05:00)
To: pgsql-general@postgresql.org
Subject: [GENERAL] pg_restore load data
Hi,

v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump, when is 
the data actually loaded?  I've looked in the list output and don't see any 
"load" statements.

Thanks

-- 
World Peace Through Nuclear Pacification



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore load data

2017-11-16 Thread bricklen
On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson  wrote:

> v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)
>
> During a "whole database" restore using pg_restore of a custom dump, when
> is the data actually loaded?  I've looked in the list output and don't see
> any "load" statements.
>

Look for COPY lines, that's how the data is restored.


[GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

Hi,

v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump, when is 
the data actually loaded?  I've looked in the list output and don't see any 
"load" statements.


Thanks

--
World Peace Through Nuclear Pacification





Re: [GENERAL] pg_restore error -- missing pg table

2017-09-11 Thread

--- t...@sss.pgh.pa.us wrote:

From: Tom Lane <t...@sss.pgh.pa.us>
To: "" <kbran...@pwhome.com>
cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore error -- missing pg table
Date: Mon, 11 Sep 2017 13:43:16 -0400

"" <kbran...@pwhome.com> writes:
> /opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms 
> --schema=public .
> ...
> pg_restore: [archiver (db)] could not execute query: ERROR:  function 
> _pg_sv_column_array(oid, smallint[]) does not exist
> LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
> ^
> HINT:  No function matches the given name and argument types. You might need 
> to add explicit type casts.

Hm, so what is _pg_sv_column_array()?  There's no built-in function by
that name.

The most likely theory is that pg_dump didn't dump that function, or
it did but pg_restore isn't restoring it, perhaps because of the --schema
restriction.  I'm not sure why the function name isn't showing up as
schema-qualified, though, if it isn't in the public schema.

regards, tom lane

===

Ah, so I assumed that because it started with "pg_" it was a built-in, and 
that was wrong. :(

Tracking it down, it seems it came from the extension "pgtap", so I'll work 
on dealing with that.

It seems that pgtap loaded itself into public (instead of the common schema, 
as it should have). I have no idea why things aren't restored in the right 
order, though.
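A quick way to test Tom's theory is to grep the archive's TOC for the function and see whether (and under which schema) it was dumped. The TOC lines below are a hypothetical sample standing in for real "pg_restore -l" output, so this runs without the backup:

```shell
# Was _pg_sv_column_array dumped at all, and into which schema?
# Hypothetical sample TOC in place of `pg_restore -l backup_dir/`.
toc=$(cat <<'EOF'
231; 1255 136500 FUNCTION public _pg_sv_column_array(oid, smallint[]) nmsroot
446; 1259 136598 VIEW public pg_all_foreign_keys nmsroot
EOF
)
# Prints the FUNCTION entry if the dump captured it.
printf '%s\n' "$toc" | grep '_pg_sv_column_array'
```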

Thanks for the pointer! I think that will allow us to get it all going again.

Kevin




Re: [GENERAL] pg_restore error -- missing pg table

2017-09-11 Thread Tom Lane
""  writes:
> /opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms 
> --schema=public .
> ...
> pg_restore: [archiver (db)] could not execute query: ERROR:  function 
> _pg_sv_column_array(oid, smallint[]) does not exist
> LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
> ^
> HINT:  No function matches the given name and argument types. You might need 
> to add explicit type casts.

Hm, so what is _pg_sv_column_array()?  There's no built-in function by
that name.

The most likely theory is that pg_dump didn't dump that function, or
it did but pg_restore isn't restoring it, perhaps because of the --schema
restriction.  I'm not sure why the function name isn't showing up as
schema-qualified, though, if it isn't in the public schema.

regards, tom lane




[GENERAL] pg_restore error -- missing pg table

2017-09-11 Thread
We've been doing backups and restores with many tests and uses and never had a 
problem until now. Sadly, I'm not sure where to even begin looking to figure 
out what to do, so any pointers would be very much appreciated!

The vital stats:
Pg: 9.5.1  (this will get upgraded to 10.0 when it's available)
OS: Centos 5.9 (the upgrade to 6.x is scheduled for real soon now)

For a little background: our DB is broken into 3 schemas: public, logging, and 
common. We do that because logging data can get quite large and is not the data 
that is needed if something goes wrong and we need to restore -- that data sits 
in public. The common schema holds the few functions that both need, so a drop 
of public or logging doesn't hurt the other. I like to think this is all pretty 
straightforward with no surprises.

We do backups like this (simplified):

cd $EXP
rm -f $EXP/*
$PGPATH/pg_dump --clean --create --format=d --jobs=2 --file=$EXP --dbname=nms 
public
/bin/tar -czf $TARNAME .

We do the restore like this (simplified):

cd $EXP
rm -f $EXP/*
tar -xzf $TARNAME
# stop services & users
# rename schema in case we need to restore
# create an empty schema to restore into
$PGPATH/pg_restore $VERBOSE --jobs=2 --dbname=nms public .
# check for errors and restore saved schema if required

The error we're getting, with supporting SQL, looks like:

DROP SCHEMA IF EXISTS savepublic CASCADE;
ALTER SCHEMA public RENAME TO savepublic;
CREATE SCHEMA public AUTHORIZATION nmsroot;

/opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms --schema=public .

...

pg_restore: processing item 446 VIEW pg_all_foreign_keys
pg_restore: creating VIEW "public.pg_all_foreign_keys"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 446; 1259 136598 VIEW 
pg_all_foreign_keys nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  function 
_pg_sv_column_array(oid, smallint[]) does not exist
LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
^
HINT:  No function matches the given name and argument types. You might need to 
add explicit type casts.
Command was: CREATE VIEW pg_all_foreign_keys AS
 SELECT n1.nspname AS fk_schema_name,
c1.relname AS fk_table_name,
k1.conname AS ...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
"pg_all_foreign_keys" does not exist
Command was: ALTER TABLE pg_all_foreign_keys OWNER TO nmsroot;

pg_restore: processing item 6841 ACL pg_all_foreign_keys

...

pg_restore: setting owner and privileges for ACL "public.pg_all_foreign_keys"
pg_restore: [archiver (db)] Error from TOC entry 6841; 0 0 ACL 
pg_all_foreign_keys nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
"pg_all_foreign_keys" does not exist
Command was: REVOKE ALL ON TABLE pg_all_foreign_keys FROM PUBLIC;
REVOKE ALL ON TABLE pg_all_foreign_keys FROM nmsroot;
GRANT ALL ON TABL...

...

WARNING: errors ignored on restore: 3

DROP SCHEMA IF EXISTS public CASCADE;
ALTER SCHEMA savepublic RENAME TO public;

Error: Problem with pg_restore, reverted to saved database copy.



I don't understand why pg_all_foreign_keys is having issues here, nor even where 
to start investigating. To the best of my knowledge, the server never ran out 
of disk space, so it should be a complete backup. As far as I can tell, none of 
the files from the backup are corrupt. The server and database both seem fine 
(other than the missing data that was accidentally dropped and is forcing this 
restore).

What am I missing and what do I need to investigate? Has anyone else ever seen 
this before and if so what did you do to fix it?

Thanks!
Kevin




Re: [GENERAL] pg_restore misuse or bug?

2017-07-24 Thread Jordan Gigov
Assuming you have a user called "test", this will create a database that
suffers from this problem.

create database mvtest;
\c mvtest
create table main_table (id serial not null, something varchar(20), primary
key (id));
create table child_table (id serial not null, parent_id int not null,
somedate date not null, someval int not null, primary key(id), foreign
key(parent_id) references main_table(id));

insert into main_table(something) values('X-Men'),('Batman');
insert into child_table(parent_id, somedate, someval)
values(2,'1989-06-23',10),(2,'1992-06-19',4),(1,'2000-07-14',13),(1,'2014-05-23',16);

CREATE MATERIALIZED VIEW movie_things AS
SELECT mt.*, jsonb_object(array_agg(ct.somedate)::text[],
array_agg(ct.someval)::text[]) AS release_prizes FROM main_table mt
LEFT JOIN child_table ct ON (mt.id = ct.parent_id) GROUP BY mt.id;
CREATE UNIQUE INDEX IF NOT EXISTS movie_things_id_idx ON movie_things USING
btree (id);
ALTER MATERIALIZED VIEW movie_things CLUSTER ON movie_things_id_idx, OWNER
TO test;

On 21 July 2017 at 17:25, Jordan Gigov  wrote:

> This is on version 9.5, 9.6 and 10beta2. I could probably make a
> test-case over the weekend if I'm at home.
>
> On 21 July 2017 at 17:03, Tom Lane  wrote:
> > Jordan Gigov  writes:
> >> When running pg_restore as the superuser it gives the following error
> >> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> >> pg_restore: [archiver (db)] Error from TOC entry 3253; 0 320628
> >> MATERIALIZED VIEW DATA combined_query_data web_user
> >> pg_restore: [archiver (db)] could not execute query: ERROR:
> >> permission denied for relation first_table_in_from_list
> >> Command was: REFRESH MATERIALIZED VIEW combined_query_data;
> >
> > What PG version is this?  Can you provide a self-contained test case?
> >
> >> I see no reason why the superuser would get a "permission denied"
> >> error.
> >
> > Matview queries are run as the owner of the matview, so this isn't
> > as surprising as all that.  But if the matview works in your normal
> > usage, then pg_dump must be doing something wrong, perhaps emitting
> > grants in the wrong order.
> >
> > regards, tom lane
>


Re: [GENERAL] pg_restore misuse or bug?

2017-07-21 Thread Jordan Gigov
This is on version 9.5, 9.6 and 10beta2. I could probably make a
test-case over the weekend if I'm at home.

On 21 July 2017 at 17:03, Tom Lane  wrote:
> Jordan Gigov  writes:
>> When running pg_restore as the superuser it gives the following error
>> pg_restore: [archiver (db)] Error while PROCESSING TOC:
>> pg_restore: [archiver (db)] Error from TOC entry 3253; 0 320628
>> MATERIALIZED VIEW DATA combined_query_data web_user
>> pg_restore: [archiver (db)] could not execute query: ERROR:
>> permission denied for relation first_table_in_from_list
>> Command was: REFRESH MATERIALIZED VIEW combined_query_data;
>
> What PG version is this?  Can you provide a self-contained test case?
>
>> I see no reason why the superuser would get a "permission denied"
>> error.
>
> Matview queries are run as the owner of the matview, so this isn't
> as surprising as all that.  But if the matview works in your normal
> usage, then pg_dump must be doing something wrong, perhaps emitting
> grants in the wrong order.
>
> regards, tom lane




Re: [GENERAL] pg_restore misuse or bug?

2017-07-21 Thread Tom Lane
Jordan Gigov  writes:
> When running pg_restore as the superuser it gives the following error
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 3253; 0 320628
> MATERIALIZED VIEW DATA combined_query_data web_user
> pg_restore: [archiver (db)] could not execute query: ERROR:
> permission denied for relation first_table_in_from_list
> Command was: REFRESH MATERIALIZED VIEW combined_query_data;

What PG version is this?  Can you provide a self-contained test case?

> I see no reason why the superuser would get a "permission denied"
> error.

Matview queries are run as the owner of the matview, so this isn't
as surprising as all that.  But if the matview works in your normal
usage, then pg_dump must be doing something wrong, perhaps emitting
grants in the wrong order.

regards, tom lane




[GENERAL] pg_restore misuse or bug?

2017-07-21 Thread Jordan Gigov
So we have this database dump created using "pg_dump -Fc ourdb >
file.dump" that has a Materialized View that gets refreshed under
certain conditions by our Java web-app.

When running pg_restore as the superuser it gives the following error
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3253; 0 320628
MATERIALIZED VIEW DATA combined_query_data web_user
pg_restore: [archiver (db)] could not execute query: ERROR:
permission denied for relation first_table_in_from_list
Command was: REFRESH MATERIALIZED VIEW combined_query_data;

If we run it as "pg_restore -d ourdb file.dump", it is treated as a
warning and the restore is successful, except for the data in the MV;
we have to run the REFRESH MATERIALIZED VIEW command separately to fill in the data.
If we run it as "pg_restore -d ourdb -1 file.dump", then it becomes a
fatal error and rolls back the transaction.

I see no reason why the superuser would get a "permission denied"
error. Is this a bug, or am I doing something wrong?




Re: [GENERAL] pg_restore successful with warnings returns exit code of non-zero

2017-03-02 Thread Adrian Klaver

On 03/02/2017 07:39 AM, dhanuj hippie wrote:

I have a pg dump (custom format, column-inserts) which I'm restoring on
an existing DB as
" /usr/bin/pg_restore -a -v -d db1 -F c " and the input is passed in
from stdin. It warns on a couple of existing rows, but completes
successfully with "WARNING: errors ignored on restore: 5".
However, this returns an exit code of 1 even though the command runs
fine and the data is restored. Is there a way to tell pg_restore to return 0
in this case?


Actually, the command did not run fine; it threw 5 errors. Now, they may be 
harmless errors or they may not be -- that should be for the user to 
determine after examining them.
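One way to act on that advice in a script is to capture pg_restore's output and decide for yourself whether ignored errors are acceptable. A sketch -- the fake_pg_restore function is a hypothetical stand-in so the example runs anywhere; in practice you would invoke the real pg_restore command line there:

```shell
# Capture the restore log and downgrade "errors ignored on restore"
# to a reviewed warning instead of a hard failure.
# fake_pg_restore is a stand-in (hypothetical) for the real command.
fake_pg_restore() {
  echo "pg_restore: WARNING: errors ignored on restore: 5" >&2
  return 1
}

status=0
log=$(fake_pg_restore 2>&1) || status=$?

# Treat a nonzero exit as acceptable only when the log shows the
# errors were the "ignored" kind; anything else stays fatal.
if [ "$status" -ne 0 ] && printf '%s\n' "$log" | grep -q 'errors ignored on restore'; then
  echo "restore finished; ignored errors need review:"
  printf '%s\n' "$log"
  status=0
fi
echo "final exit status: $status"
```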




Thanks



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] pg_restore successful with warnings returns exit code of non-zero

2017-03-02 Thread dhanuj hippie
I have a pg dump (custom format, column-inserts) which I'm restoring on an
existing DB as
" /usr/bin/pg_restore -a -v -d db1 -F c " and the input is passed in from
stdin. It warns on a couple of existing rows, but completes successfully
with "WARNING: errors ignored on restore: 5".
However, this returns an exit code of 1 even though the command runs fine
and the data is restored. Is there a way to tell pg_restore to return 0 in this
case?

Thanks


Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Daniel Westermann
>It isn't consistent but it's by purpose. And there's a really good reason for 
>that behaviour. There's no issue with psql connecting to a default database 
>because psql doesn't do anything by itself. pg_restore will do something to 
>the database it connects to. It might drop some objects, create some, add 
>data. I want to be sure it's restored in the right database. I don't want it 
>to second-guess what I want to do. Otherwise, I'll have a really hard time 
>fixing everything it did. So -d is required by pg_restore to connect to some 
>database, whereas there's no big deal with psql connecting to a default 
>database. 

Ok, makes sense. Thanks all for your answers 

Regards 
Daniel 


Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Adrian Klaver

On 12/21/2016 12:59 PM, Guillaume Lelarge wrote:

2016-12-21 20:29 GMT+01:00 Daniel Westermann:

>> postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439
>> -F d -C -j 2 /var/tmp/exp/
>>
>> This runs fine but where does it connect to? Nothing is listening on
>> port 5439.

>Given the lack of a -d switch, I'd expect it not to try to connect
>anywhere, just emit the restore script on stdout.  At least, that's
>what happens for me.  It's weird that you don't see any printout.

>(To be clear: it's -d that triggers a connection attempt in pg_restore.
>Without that, -h and -p are just noise.)

Ok, that makes sense. I got the output on screen, as mentioned.

What I would have expected is at least a hint or warning that host
and port are ignored if you do not specify the "-d" switch. Giving
port and host clearly indicates that I want to connect to what I
provided, doesn't it? psql uses the os username as default database,
pg_restore doesn't?

postgres@pgbox:/home/postgres/ [PG961] unset PGDATABASE
postgres@pgbox:/home/postgres/ [] psql
psql (9.6.1)
Type "help" for help.

(postgres@[local]:5439) [postgres] >

Providing "-d" gives a meaningful message at least:

postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p
=== -d postgres -F d -C /var/tmp/exp/
pg_restore: [archiver (db)] connection to database "postgres"
failed: invalid port number: "==="

Maybe it is only me, but this is not consistent behavior, is it?


It isn't consistent but it's by purpose. And there's a really good
reason for that behaviour. There's no issue with psql connecting to a
default database because psql doesn't do anything by itself. pg_restore


That is not entirely accurate. psql -f some_destructive_script.sql could 
ruin your day.



will do something to the database it connects to. It might drop some
objects, create some, add data. I want to be sure it's restored in the
right database. I don't want it to second-guess what I want to do.
Otherwise, I'll have a really hard time fixing everything it did. So -d
is required by pg_restore to connect to some database, whereas there's
no big deal with psql connecting to a default database.


--
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Adrian Klaver

On 12/21/2016 11:29 AM, Daniel Westermann wrote:


Providing "-d" gives a meaningful message at least:

postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p
=== -d postgres -F d -C /var/tmp/exp/
pg_restore: [archiver (db)] connection to database "postgres" failed:
invalid port number: "==="

Maybe it is only me, but this is not consistent behavior, is it?


Maybe so, but that is why there is documentation for programs/commands: 
to deal with the exceptions. In this case the information is at the top 
of the pg_restore docs. Generally, whenever I get an odd result I head to 
the bottom of the docs, under the Notes section. This is usually where 
the exceptions are called out.





Regards
Daniel



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Guillaume Lelarge
2016-12-21 20:29 GMT+01:00 Daniel Westermann <
daniel.westerm...@dbi-services.com>:

> >> postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439
> -F d -C -j 2 /var/tmp/exp/
> >>
> >> This runs fine but where does it connect to? Nothing is listening on
> port 5439.
>
> >Given the lack of a -d switch, I'd expect it not to try to connect
> >anywhere, just emit the restore script on stdout.  At least, that's
> >what happens for me.  It's weird that you don't see any printout.
>
> >(To be clear: it's -d that triggers a connection attempt in pg_restore.
> >Without that, -h and -p are just noise.)
>
> Ok, that makes sense. I got the output on screen, as mentioned.
>
> What I would have expected is at least a hint or warning that host and
> port are ignored if you do not specify the "-d" switch. Giving port and
> host clearly indicates that I want to connect to what I provided, doesn't
> it? psql uses the os username as default database, pg_restore doesn't?
>
> postgres@pgbox:/home/postgres/ [PG961] unset PGDATABASE
> postgres@pgbox:/home/postgres/ [] psql
> psql (9.6.1)
> Type "help" for help.
>
> (postgres@[local]:5439) [postgres] >
>
> Providing "-d" gives a meaningful message at least:
>
> postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p ===
> -d postgres -F d -C /var/tmp/exp/
> pg_restore: [archiver (db)] connection to database "postgres" failed:
> invalid port number: "==="
>
> Maybe it is only me, but this is not consistent behavior, is it?
>
>
It isn't consistent but it's by purpose. And there's a really good reason
for that behaviour. There's no issue with psql connecting to a default
database because psql doesn't do anything by itself. pg_restore will do
something to the database it connects to. It might drop some objects,
create some, add data. I want to be sure it's restored in the right
database. I don't want it to second-guess what I want to do. Otherwise,
I'll have a really hard time fixing everything it did. So -d is required by
pg_restore to connect to some database, whereas there's no big deal with
psql connecting to a default database.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Daniel Westermann
>> postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -F d 
>> -C -j 2 /var/tmp/exp/ 
>> 
>> This runs fine but where does it connect to? Nothing is listening on port 
>> 5439. 

>Given the lack of a -d switch, I'd expect it not to try to connect 
>anywhere, just emit the restore script on stdout. At least, that's 
>what happens for me. It's weird that you don't see any printout. 

>(To be clear: it's -d that triggers a connection attempt in pg_restore. 
>Without that, -h and -p are just noise.) 

Ok, that makes sense. I got the output on screen, as mentioned. 

What I would have expected is at least a hint or warning that host and port are 
ignored if you do not specify the "-d" switch. Giving port and host clearly 
indicates that I want to connect to what I provided, doesn't it? psql uses the 
os username as default database, pg_restore doesn't? 

postgres@pgbox:/home/postgres/ [PG961] unset PGDATABASE 
postgres@pgbox:/home/postgres/ [] psql 
psql (9.6.1) 
Type "help" for help. 

(postgres@[local]:5439) [postgres] > 

Providing "-d" gives a meaningful message at least: 

postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p === -d 
postgres -F d -C /var/tmp/exp/ 
pg_restore: [archiver (db)] connection to database "postgres" failed: invalid 
port number: "===" 

Maybe it is only me, but this is not consistent behavior, is it? 


Regards 
Daniel 


Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Tom Lane
Daniel Westermann  writes:
> postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -F d 
> -C -j 2 /var/tmp/exp/ 

> This runs fine but where does it connect to? Nothing is listening on port 
> 5439. 

Given the lack of a -d switch, I'd expect it not to try to connect
anywhere, just emit the restore script on stdout.  At least, that's
what happens for me.  It's weird that you don't see any printout.

(To be clear: it's -d that triggers a connection attempt in pg_restore.
Without that, -h and -p are just noise.)

regards, tom lane
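The rule Tom states can be modeled in a few lines: pg_restore connects only when -d/--dbname is given; otherwise -h and -p are ignored and the SQL script goes to stdout (or to a file with -f). The restore_mode helper below is hypothetical, purely to illustrate the decision, not part of pg_restore itself:

```shell
# Model of pg_restore's mode selection (hypothetical helper, for
# illustration only): a database name triggers a connection attempt;
# without one, the archive is turned into an SQL script instead.
restore_mode() {
  for arg in "$@"; do
    case "$arg" in
      -d|--dbname|--dbname=*) echo "connect and restore"; return ;;
    esac
  done
  echo "emit SQL script"
}

restore_mode -h localhost -p 5439 -F d -C -j 2 /var/tmp/exp/  # emit SQL script
restore_mode -h localhost -p 5439 -d postgres /var/tmp/exp/   # connect and restore
```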




Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Francisco Olarte
On Wed, Dec 21, 2016 at 6:22 PM, Daniel Westermann
 wrote:
> I have a PostgreSQL 9.5.4 and a PostgreSQL 9.6.1 instance installed on the
> same host.

.
> What do I miss? I can give any port to pg_restore and it just seems to be
> fine.

Are you by chance using Debian/Ubuntu/any derivative? Maybe
pg_restore is not directly executed. On Ubuntu I have this:

$ type -path pg_restore
/usr/bin/pg_restore
$ file /usr/bin/pg_restore
/usr/bin/pg_restore: symbolic link to ../share/postgresql-common/pg_wrapper
$ file /usr/share/postgresql-common/pg_wrapper
/usr/share/postgresql-common/pg_wrapper: Perl script text executable

And that pg_wrapper thingie has the habit of completely fscking up my 
connection options / service files (even though I do not have a 
server installed; I only install the client programs to connect to the 
remote servers).

You could check with type/file whether you have something similar.

Francisco Olarte.




Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Andreas Kretschmer



Am 21.12.2016 um 18:22 schrieb Daniel Westermann:



Now I try to import into 9.6.1 => the instance is not running but the 
environment is  set:


postgres@pgbox:/home/postgres/ [PG961] netstat -na | grep 5439
postgres@pgbox:/home/postgres/ [PG961] echo $PGPORT
5439

postgres@pgbox:/home/postgres/ [PG961] pg_restore -V
pg_restore (PostgreSQL) 9.6.1
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 
-F d -C -j 2 /var/tmp/exp/


This runs fine but where does it connect to? Nothing is listening on 
port 5439.




No, that can't run.

Andreas


Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Daniel Westermann
It does

Sent from my Phone

> On 21 Dec 2016, at 18:40, Andreas Kretschmer  wrote:
> 
> 
> 
>> Am 21.12.2016 um 18:22 schrieb Daniel Westermann:
>> 
>> 
>> Now I try to import into 9.6.1 => the instance is not running but the 
>> environment is  set:
>> 
>> postgres@pgbox:/home/postgres/ [PG961] netstat -na | grep 5439
>> postgres@pgbox:/home/postgres/ [PG961] echo $PGPORT
>> 5439
>> 
>> postgres@pgbox:/home/postgres/ [PG961] pg_restore -V
>> pg_restore (PostgreSQL) 9.6.1
>> postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -F d 
>> -C -j 2 /var/tmp/exp/
>> 
>> This runs fine but where does it connect to? Nothing is listening on port 
>> 5439.
>> 
> 
> No, that can't run.
> 
> Andreas


Re: [GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Adrian Klaver

On 12/21/2016 09:22 AM, Daniel Westermann wrote:

Hi all,

I have a PostgreSQL 9.5.4 and a PostgreSQL 9.6.1 instance installed on
the same host. I dump the 9.5.4 instance with:

pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test

.. which runs fine. I get the output as expected:
postgres@pgbox:/home/postgres/ [PG954] ls /var/tmp/exp/
3016.dat.gz  3017.dat.gz  toc.dat

Source instance:
(postgres@[local]:5438) [postgres] > show port;
 port
--
 5438
(1 row)

Time: 0.328 ms
(postgres@[local]:5438) [postgres] > select version();
-[ RECORD 1
]---
version | PostgreSQL 9.5.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC)
4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit

Now I try to import into 9.6.1 => the instance is not running but the
environment is  set:

postgres@pgbox:/home/postgres/ [PG961] netstat -na | grep 5439
postgres@pgbox:/home/postgres/ [PG961] echo $PGPORT
5439

postgres@pgbox:/home/postgres/ [PG961] pg_restore -V
pg_restore (PostgreSQL) 9.6.1
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439
-F d -C -j 2 /var/tmp/exp/

This runs fine but where does it connect to? Nothing is listening on
port 5439.


https://www.postgresql.org/docs/9.5/static/app-pgrestore.html

"pg_restore can operate in two modes. If a database name is specified, 
pg_restore connects to that database and restores archive contents 
directly into the database. Otherwise, a script containing the SQL 
commands necessary to rebuild the database is created and written to a 
file or standard output. This script output is equivalent to the plain 
text output format of pg_dump. Some of the options controlling the 
output are therefore analogous to pg_dump options."



So you can use pg_restore to restore all or part of a pg_dump (custom 
format) file to another file. I find this very handy.





postgres@pgbox:/home/postgres/ [PG961] netstat -tulpen
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address
State   User   Inode  PID/Program name
tcp0  0 0.0.0.0:22  0.0.0.0:*
LISTEN  0  15929  -
tcp0  0 127.0.0.1:250.0.0.0:*
LISTEN  0  17460  -
tcp0  0 0.0.0.0:54380.0.0.0:*
LISTEN  1000   18923  2829/postgres
tcp6   0  0 :::22   :::*
LISTEN  0  15938  -
tcp6   0  0 ::1:25  :::*
LISTEN  0  17461  -
tcp6   0  0 :::5438 :::*
LISTEN  1000   18924  2829/postgres
udp0  0 0.0.0.0:68
0.0.0.0:*   0  14940
-
udp0  0 0.0.0.0:49566
0.0.0.0:*   0  14929
-
udp6   0  0 :::40307
:::*0  14930
-

postgres@pgbox:/home/postgres/ [PG961] psql -h localhost -p 5439
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5439?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5439?

What do I miss? I can give any port to pg_restore and it just seems to
be fine. Even this seems to working (the copy from stdin is displayed on
the screen):
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p
=== -F d -C /var/tmp/exp/


Thanks
Daniel




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] pg_restore to a port where nobody is listening?

2016-12-21 Thread Daniel Westermann
Hi all, 

I have a PostgreSQL 9.5.4 and a PostgreSQL 9.6.1 instance installed on the same 
host. I dump the 9.5.4 instance with: 

pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test 

.. which runs fine. I get the output as expected: 
postgres@pgbox:/home/postgres/ [PG954] ls /var/tmp/exp/ 
3016.dat.gz 3017.dat.gz toc.dat 

Source instance: 
(postgres@[local]:5438) [postgres] > show port; 
port 
-- 
5438 
(1 row) 

Time: 0.328 ms 
(postgres@[local]:5438) [postgres] > select version(); 
-[ RECORD 1 ]---
version | PostgreSQL 9.5.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-4), 64-bit 

Now I try to import into 9.6.1 => the instance is not running but the 
environment is set: 

postgres@pgbox:/home/postgres/ [PG961] netstat -na | grep 5439 
postgres@pgbox:/home/postgres/ [PG961] echo $PGPORT 
5439 

postgres@pgbox:/home/postgres/ [PG961] pg_restore -V 
pg_restore (PostgreSQL) 9.6.1 
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -F d -C 
-j 2 /var/tmp/exp/ 

This runs fine but where does it connect to? Nothing is listening on port 5439. 

postgres@pgbox:/home/postgres/ [PG961] netstat -tulpen 
(Not all processes could be identified, non-owned process info 
will not be shown, you would have to be root to see it all.) 
Active Internet connections (only servers) 
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program 
name 
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 15929 - 
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 0 17460 - 
tcp 0 0 0.0.0.0:5438 0.0.0.0:* LISTEN 1000 18923 2829/postgres 
tcp6 0 0 :::22 :::* LISTEN 0 15938 - 
tcp6 0 0 ::1:25 :::* LISTEN 0 17461 - 
tcp6 0 0 :::5438 :::* LISTEN 1000 18924 2829/postgres 
udp 0 0 0.0.0.0:68 0.0.0.0:* 0 14940 - 
udp 0 0 0.0.0.0:49566 0.0.0.0:* 0 14929 - 
udp6 0 0 :::40307 :::* 0 14930 - 

postgres@pgbox:/home/postgres/ [PG961] psql -h localhost -p 5439 
psql: could not connect to server: Connection refused 
Is the server running on host "localhost" (::1) and accepting 
TCP/IP connections on port 5439? 
could not connect to server: Connection refused 
Is the server running on host "localhost" (127.0.0.1) and accepting 
TCP/IP connections on port 5439? 

What am I missing? I can give any port to pg_restore and it just seems to be fine. 
Even this seems to work (the COPY ... FROM stdin output is displayed on the screen): 
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p === -F d 
-C /var/tmp/exp/ 


Thanks 
Daniel 
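The behaviour Daniel describes is consistent with how pg_restore treats connection options: unless a target database is named with -d/--dbname, pg_restore never opens a connection at all; -h and -p are simply unused and the reconstructed SQL is written to standard output (or to the file given with -f). A sketch, with hypothetical paths:

```shell
# No --dbname: pg_restore only emits SQL, so any -h/-p value "works".
pg_restore -h localhost -p 5439 -F d /var/tmp/exp/ > restore.sql

# With --dbname it must actually connect, and a dead port now fails
# with the same "Connection refused" that psql showed above.
pg_restore -h localhost -p 5439 -d test -F d -j 2 /var/tmp/exp/
```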



Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-17 Thread Arnaud L.

Le 16/11/2016 à 20:05, Tom Lane a écrit :

Arnaud Lesauvage  writes:

[ dump from problematic database ]


OK, thanks for the test case.  The problem here is that pg_dump is setting
up a circular dependency that it doesn't know how to break correctly.
You've got a couple of views that are implicitly dependent on the primary
keys of their underlying tables, because they use a GROUP BY the primary
key without also grouping by other columns they use post-grouping.  That
means that pg_dump has to dump the view definition after the creation of
the primary key, but it also needs to put the view out sooner than that
for other reasons.  It manages to deal with that okay in the default mode,
but when you have --clean in there, it ends up generating an illegal DROP
RULE command.


All right, at least I'm glad that I did not miss something obvious.



This is something we ought to fix, but it's not exactly trivial to do.
In the meantime I'd suggest changing the view definitions to not assume
that the underlying tables have primary keys.  It looks like in
view_temp_export_geo_recherche_extra_sites_projets you need to add
c.official_language_id to the GROUP BY, and similarly in
view_temp_export_geo_recherche_offtrad_sites.


Thanks for the tip ! I'll try this ASAP.
I never "GROUP BY" primary keys only, so I can consider this as an error 
that needs fixing. I did not even know that this was valid SQL to be honest.


Thanks a lot for your help !

Regards
--
Arnaud




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-16 Thread Tom Lane
Arnaud Lesauvage  writes:
> [ dump from problematic database ]

OK, thanks for the test case.  The problem here is that pg_dump is setting
up a circular dependency that it doesn't know how to break correctly.
You've got a couple of views that are implicitly dependent on the primary
keys of their underlying tables, because they use a GROUP BY the primary
key without also grouping by other columns they use post-grouping.  That
means that pg_dump has to dump the view definition after the creation of
the primary key, but it also needs to put the view out sooner than that
for other reasons.  It manages to deal with that okay in the default mode,
but when you have --clean in there, it ends up generating an illegal DROP
RULE command.

This is something we ought to fix, but it's not exactly trivial to do.
In the meantime I'd suggest changing the view definitions to not assume
that the underlying tables have primary keys.  It looks like in
view_temp_export_geo_recherche_extra_sites_projets you need to add
c.official_language_id to the GROUP BY, and similarly in
view_temp_export_geo_recherche_offtrad_sites.

regards, tom lane
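Tom's point can be sketched with a hypothetical pair of tables (none of these names are from the thread): PostgreSQL lets a query grouped by a table's primary key select that table's other columns ungrouped, because they are functionally dependent on the key. A view written that way implicitly depends on the PRIMARY KEY constraint itself, which is the circular dependency that pg_dump --clean trips over.

```sql
-- Hypothetical schema, for illustration only.
CREATE TABLE sites  (id int PRIMARY KEY, name text);
CREATE TABLE visits (site_id int REFERENCES sites(id), n int);

-- Legal because s.id is the primary key: s.name is functionally
-- dependent on it, so the view silently depends on the PK constraint.
CREATE VIEW site_totals AS
SELECT s.id, s.name, sum(v.n) AS total
FROM sites s JOIN visits v ON v.site_id = s.id
GROUP BY s.id;

-- The fix Tom suggests: group by every selected column, so the view
-- no longer requires the primary key to exist before it is created.
CREATE OR REPLACE VIEW site_totals AS
SELECT s.id, s.name, sum(v.n) AS total
FROM sites s JOIN visits v ON v.site_id = s.id
GROUP BY s.id, s.name;
```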




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Adrian Klaver

On 11/15/2016 08:09 AM, Arnaud L. wrote:

Le 15/11/2016 à 16:44, Tom Lane a écrit :

You'd have to provide a lot more detail before anyone could tell if there
was a fixable bug here, but I rather doubt it.  There are at least two
ways this scenario might lose:

1. There are additional objects in the target database that have
dependencies on ones that are in the dump.  In that case there is
no ordering of dropping the objects in the dump that will succeed.


The target database was restored from the dump into an empty database.
So what I do is :
dropdb -U postgres -h localhost db1
createdb -U postgres -h localhost db1
psql -U postgres -h localhost -c "CREATE EXTENSION postgis" db1
pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f
"D:\temp.dump" db1
pg_restore -U postgres -h localhost -j 2 -d db1 "D:\temp.dump"

That works, no error, no warning.

Then
pg_restore -U postgres -h localhost --clean -d db1 "D:\temp.dump"
fails.

So can we exclude additional objects in the target database in this case?



2. There are dependency chains passing through objects that weren't
dumped (ie, if A depends on B which depends on C, and you omit B
from the dump, it might still be the case that A can't be restored
before C).


Can I trust what pgadmin says about objects dependent on a schema?


I believe that only shows objects that are declared for that schema. 
It does not show internal relationships of the objects to other objects 
outside their schema. In other words a function that is public.some_fnc 
but inside the function body operates on objects in another schema. Or a 
table in one schema that has a FK to a table in another schema and so on.



It says that the public schema's dependent objects are only its own
operators, functions, etc. (i.e., what's in the postgis extension), and
the same for the other two schemas.
They don't show any dependent objects outside themselves.



If you think neither of those cases apply, please provide a
self-contained
test case.


That's not going to be easy. I'll try to trim down a pg_dump -s to see
how I can reproduce this.

--
Arnaud







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Adrian Klaver

On 11/15/2016 07:55 AM, Arnaud L. wrote:

Le 15/11/2016 à 16:21, Adrian Klaver a écrit :

The restore fails on a lot of statements, complaining about
dependencies.
For instance, "cannot drop rule _RETURN on view myview1 because view
myview1 requires it". Or "cannot drop constraint mypkey on table mytable
because other objects depend on it [list of foreign keys]".


So where are the above objects, eg what schema?


They are in a schema that is included in the dump (more precisely, not
excluded from).
It happens in all schemas, not a specific one.


pg_restore  -l  "D:\db.dump"
to see what pg_restore thinks is the order of restore.


OK I did that.
The output is pretty big.
The second error is a consequence of the first one (they were not
foreign keys but views, sorry for the mistake).
So the very first error from pg_restore, the "cannot drop rule
_RETURN..." is TOC 11330.
It is on line 4948 in the pg_restore -l output (does this order
matter, or is it the TOC?).


The line order:

https://www.postgresql.org/docs/9.5/static/app-pgrestore.html

"The listing file consists of a header and one line for each item, e.g.:



Semicolons start a comment, and the numbers at the start of lines refer 
to the internal archive ID assigned to each item.


Lines in the file can be commented out, deleted, and reordered. For example:
..
"
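The -l / -L round trip the documentation describes looks like this (file names hypothetical; the TOC entry numbers below are borrowed from this thread):

```shell
# Write the table of contents, edit it, then restore only what it keeps:
#   pg_restore -l db.dump > db.list
#   pg_restore -L db.list -d db1 db.dump
# An entry is skipped by commenting it out with a leading semicolon,
# e.g. to skip TOC entry 1467 (the view discussed above):
cat > db.list <<'EOF'
2398; 1247 147632 TYPE public app_kinds nmsroot
1467; 1259 148562 VIEW public myview1 nmsroot
EOF
sed 's/^1467;/;1467;/' db.list > db.edited
cat db.edited
```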



The view is number 1467 in the TOC, and is on line 2353 of the
pg_restore -l output.


--
Arnaud




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Tom Lane
"Arnaud L."  writes:
> The target database was restored from the dump into an empty database.
> So what I do is :
> dropdb -U postgres -h localhost db1
> createdb -U postgres -h localhost db1
> psql -U postgres -h localhost -c "CREATE EXTENSION postgis" db1
> pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f 
> "D:\temp.dump" db1
> pg_restore -U postgres -h localhost -j 2 -d db1 "D:\temp.dump"

> That works, no error, no warning.

> Then
> pg_restore -U postgres -h localhost --clean -d db1 "D:\temp.dump"
> fails.

[ squint... ]  That does look like it ought to work.  I wonder if postgis
is doing something weird?  But again, it's impossible to debug this at
this level of detail.  If you can make a test case I'd be happy to look
into it.

regards, tom lane




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Arnaud L.

Le 15/11/2016 à 16:44, Tom Lane a écrit :

You'd have to provide a lot more detail before anyone could tell if there
was a fixable bug here, but I rather doubt it.  There are at least two
ways this scenario might lose:

1. There are additional objects in the target database that have
dependencies on ones that are in the dump.  In that case there is
no ordering of dropping the objects in the dump that will succeed.


The target database was restored from the dump into an empty database.
So what I do is :
dropdb -U postgres -h localhost db1
createdb -U postgres -h localhost db1
psql -U postgres -h localhost -c "CREATE EXTENSION postgis" db1
pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f 
"D:\temp.dump" db1

pg_restore -U postgres -h localhost -j 2 -d db1 "D:\temp.dump"

That works, no error, no warning.

Then
pg_restore -U postgres -h localhost --clean -d db1 "D:\temp.dump"
fails.

So can we exclude additional objects in the target database in this case?



2. There are dependency chains passing through objects that weren't
dumped (ie, if A depends on B which depends on C, and you omit B
from the dump, it might still be the case that A can't be restored
before C).


Can I trust what pgadmin says about objects dependent on a schema?
It says that the public schema's dependent objects are only its own 
operators, functions, etc. (i.e., what's in the postgis extension), and 
the same for the other two schemas.

They don't show any dependent objects outside themselves.



If you think neither of those cases apply, please provide a self-contained
test case.


That's not going to be easy. I'll try to trim down a pg_dump -s to see 
how I can reproduce this.


--
Arnaud






Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Arnaud L.

Le 15/11/2016 à 16:21, Adrian Klaver a écrit :

The restore fails on a lot of statements, complaining about dependencies.
For instance, "cannot drop rule _RETURN on view myview1 because view
myview1 requires it". Or "cannot drop constraint mypkey on table mytable
because other objects depend on it [list of foreign keys]".


So where are the above objects, eg what schema?


They are in a schema that is included in the dump (more precisely, not 
excluded from).

It happens in all schemas, not a specific one.


pg_restore  -l  "D:\db.dump"
to see what pg_restore thinks is the order of restore.


OK I did that.
The output is pretty big.
The second error is a consequence of the first one (they were not 
foreign keys but views, sorry for the mistake).
So the very first error from pg_restore, the "cannot drop rule 
_RETURN..." is TOC 11330.
It is on line 4948 in the pg_restore -l output (does this order 
matter, or is it the TOC?).
The view is number 1467 in the TOC, and is on line 2353 of the 
pg_restore -l output.



--
Arnaud




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Tom Lane
"Arnaud L."  writes:
> I am making daily backups of a database and omitting two schemas from 
> the dump (and the public schema). Those schemas are read-only and have 
> no dependency with the rest of the database.
> My command is :
> pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f 
> "D:\db.dump" db1

> Now, I would like to restore this dump in a database where the 
> aforementioned schemas have been loaded. So i would like to do a 
> pg_restore --clean, in order to preserve those two schemas (and the 
> public one), and to restore everything else.
> The restore fails on a lot of statements, complaining about dependencies.
> For instance, "cannot drop rule _RETURN on view myview1 because view 
> myview1 requires it". Or "cannot drop constraint mypkey on table mytable 
> because other objects depend on it [list of foreign keys]".

You'd have to provide a lot more detail before anyone could tell if there
was a fixable bug here, but I rather doubt it.  There are at least two
ways this scenario might lose:

1. There are additional objects in the target database that have
dependencies on ones that are in the dump.  In that case there is
no ordering of dropping the objects in the dump that will succeed.

2. There are dependency chains passing through objects that weren't
dumped (ie, if A depends on B which depends on C, and you omit B
from the dump, it might still be the case that A can't be restored
before C).

If you think neither of those cases apply, please provide a self-contained
test case.

regards, tom lane




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Adrian Klaver

On 11/15/2016 07:11 AM, Arnaud L. wrote:

Hi all

Postgresql 9.3.14 on Windows.
I am making daily backups of a database and omitting two schemas from
the dump (and the public schema). Those schemas are read-only and have
no dependency with the rest of the database.
My command is :
pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f
"D:\db.dump" db1

(I also omit "public" because we use postgis, so it's cleaner for me to
dump without the public schema, then start from an empty database and do
a "create extension postgis" before restoring)

I can successfully restore this dump without any error in an empty
database.

Now, I would like to restore this dump in a database where the
aforementioned schemas have been loaded. So i would like to do a
pg_restore --clean, in order to preserve those two schemas (and the
public one), and to restore everything else.
The restore fails on a lot of statements, complaining about dependencies.
For instance, "cannot drop rule _RETURN on view myview1 because view
myview1 requires it". Or "cannot drop constraint mypkey on table mytable
because other objects depend on it [list of foreign keys]".


So where are the above objects, eg what schema?



My command is :
pg_restore -U postgres -h pgsql1 --clean -d db1 "D:\db.dump"

I thought that pg_restore was supposed to reorder the objects to honor
dependencies? How can I get this right?


To start:

pg_restore  -l  "D:\db.dump"

to see what pg_restore thinks is the order of restore.




Thanks for your help !
Cheers

--
Arnaud







--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] pg_restore --clean failing due to dependancies

2016-11-15 Thread Arnaud L.

Hi all

Postgresql 9.3.14 on Windows.
I am making daily backups of a database and omitting two schemas from 
the dump (and the public schema). Those schemas are read-only and have 
no dependency with the rest of the database.

My command is :
pg_dump -h pgsql1 -U postgres -b -Fc -E UTF8 -N public -N osm -N osm2 -f 
"D:\db.dump" db1


(I also omit "public" because we use postgis, so it's cleaner for me to 
dump without the public schema, then start from an empty database and do 
a "create extension postgis" before restoring)


I can successfully restore this dump without any error in an empty database.

Now, I would like to restore this dump in a database where the 
aforementioned schemas have been loaded. So i would like to do a 
pg_restore --clean, in order to preserve those two schemas (and the 
public one), and to restore everything else.

The restore fails on a lot of statements, complaining about dependencies.
For instance, "cannot drop rule _RETURN on view myview1 because view 
myview1 requires it". Or "cannot drop constraint mypkey on table mytable 
because other objects depend on it [list of foreign keys]".


My command is :
pg_restore -U postgres -h pgsql1 --clean -d db1 "D:\db.dump"

I thought that pg_restore was supposed to reorder the objects to honor 
dependencies? How can I get this right?


Thanks for your help !
Cheers

--
Arnaud






Re: [GENERAL] pg_restore question

2016-09-19 Thread Adrian Klaver

On 09/19/2016 01:55 PM, kbran...@pwhome.com wrote:

On 09/19/2016 01:06 PM, kbrannen(at)pwhome(dot)com wrote:

--- adrian(dot)klaver(at)aklaver(dot)com wrote:

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: kbrannen(at)pwhome(dot)com, pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] pg_restore question
Date: Mon, 19 Sep 2016 12:46:24 -0700

On 09/19/2016 11:46 AM, kbrannen(at)pwhome(dot)com wrote:


But if I dump using:
pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
then the restore, with the schema still there, relying on --clean to 
help:
pg_restore --dbname=nms --clean --create --schema=public .
will fail with:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE 
app_kinds nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  type 
"app_kinds" already exists
Command was: CREATE TYPE app_kinds AS ENUM (



Second, did it actually fail or did it just throw the error and keep on
going?


So I changed my process to:

# create backup just in case
echo "alter schema public rename to save; create schema public;" | psql
pg_restore --dbname=nms --schema=public -j3 .

It still shows all the stuff below (from the original email) and a lot more 
ending with:

WARNING: errors ignored on restore: 18


Meant to add to previous post. If any of the errors are of the 'objects
does not exist' variety you can get rid of them using:

--if-exists

Use conditional commands (i.e. add an IF EXISTS clause) when
cleaning database objects. This option is not valid unless --clean is
also specified.


By doing it 1 command at a time with lots of compares, I think I'm starting to
understand what's going on.

Documenting this for those who are curious and to help others in the future...

Back to the part where I have the data spread over 2 schemas... Note that I'm
trying to backup only 1 at a time.  The reason for doing this is because the
important data is in the public schema, while the other schema is for logging
data and will be 100's of times bigger (we do back this up but not as often
because losing a little is not fatal).

Anyway, when I change the public schema to a new name, the tables in the
logging schema that use types from public automatically change, e.g. from
public.call_types to savepublic.call_types. That also means that if I were to do
"drop schema public cascade", then the logging table definitions get changed
as the column is dropped. Yikes! (Sadly, I should have realized this before but 
didn't.)

But that's why I'm getting the errors, because when I try to restore the public
schema, our custom types do already exist.

To your suggestion of adding "--if-exists --clean", that does remove 4 errors,
leaving me with 14 that are true errors.
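For reference, the combination being discussed looks like this (database name from the thread, dump directory hypothetical; --if-exists is only accepted together with --clean):

```shell
# Each generated DROP becomes DROP ... IF EXISTS, silencing the
# "object does not exist" class of errors during the clean phase.
pg_restore --dbname=nms --clean --if-exists --schema=public -j3 /tmp/exp
```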

My take away from this is that if we have schemas that are dependent on each
other, we can't backup/restore just 1. That means we must break the dependency
or else back them both up all the time (not desirable to do this frequently
because of size). Sigh...

I'll go back and look at what it would take to break the dependencies, but any
other suggestions would be welcome.


One possible solution is to:

1) Do the pg_dump in custom format.

2) Do pg_restore with --schema-only to a file. Open file and change the 
search_path to include the logging schema.


3) Pass the schema only plain text file into psql to get the schema 
objects created.


4) pg_restore using:

-a
--data-only

Restore only the data, not the schema (data definitions). Table 
data, large objects, and sequence values are restored, if present in the 
archive.


This option is similar to, but for historical reasons not identical 
to, specifying --section=data.
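The four steps above can be sketched as shell commands (database and file names hypothetical; the sed pattern assumes the dump pinned search_path to public only, as shown later in this thread, and uses GNU sed's -i):

```shell
pg_dump -Fc -f db.dump nms                         # 1) custom-format dump
pg_restore --schema-only -f schema.sql db.dump     # 2) schema as plain SQL
# edit the script so objects are created with both schemas visible
sed -i 's/^SET search_path = public, pg_catalog;$/SET search_path = public, logging, pg_catalog;/' schema.sql
psql -d nms -f schema.sql                          # 3) create the objects
pg_restore --data-only -d nms db.dump              # 4) load the data
```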





Thanks for the advice and pointers Adrian!

Kevin





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore question

2016-09-19 Thread
>On 09/19/2016 01:06 PM, kbrannen(at)pwhome(dot)com wrote:
>>> --- adrian(dot)klaver(at)aklaver(dot)com wrote:
>>>
>>> From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
>>> To: kbrannen(at)pwhome(dot)com, pgsql-general(at)postgresql(dot)org
>>> Subject: Re: [GENERAL] pg_restore question
>>> Date: Mon, 19 Sep 2016 12:46:24 -0700
>>>
>>> On 09/19/2016 11:46 AM, kbrannen(at)pwhome(dot)com wrote:
>>>>
>>>> But if I dump using:
>>>> pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
>>>> then the restore, with the schema still there, relying on --clean 
>>>> to help:
>>>> pg_restore --dbname=nms --clean --create --schema=public .
>>>> will fail with:
>>>>
>>>> pg_restore: [archiver (db)] Error while PROCESSING TOC:
>>>> pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 
>>>> TYPE app_kinds nmsroot
>>>> pg_restore: [archiver (db)] could not execute query: ERROR:  type 
>>>> "app_kinds" already exists
>>>> Command was: CREATE TYPE app_kinds AS ENUM (
>>>
>>>
>>> Second, did it actually fail or did it just throw the error and keep on
>>> going?
>>
>> So I changed my process to:
>>
>> # create backup just in case
>> echo "alter schema public rename to save; create schema public;" | psql
>> pg_restore --dbname=nms --schema=public -j3 .
>>
>> It still shows all the stuff below (from the original email) and a lot more 
>> ending with:
>>
>> WARNING: errors ignored on restore: 18
>
>Meant to add to previous post. If any of the errors are of the 'objects 
>does not exist' variety you can get rid of them using:
>
>--if-exists
>
> Use conditional commands (i.e. add an IF EXISTS clause) when 
>cleaning database objects. This option is not valid unless --clean is 
>also specified.

By doing it 1 command at a time with lots of compares, I think I'm starting to
understand what's going on.

Documenting this for those who are curious and to help others in the future...

Back to the part where I have the data spread over 2 schemas... Note that I'm
trying to backup only 1 at a time.  The reason for doing this is because the
important data is in the public schema, while the other schema is for logging
data and will be 100's of times bigger (we do back this up but not as often
because losing a little is not fatal).

Anyway, when I change the public schema to a new name, the tables in the
logging schema that use types from public automatically change, e.g. from
public.call_types to savepublic.call_types. That also means that if I were to do
"drop schema public cascade", then the logging table definitions get changed
as the column is dropped. Yikes! (Sadly, I should have realized this before but 
didn't.)
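The surprise here can be restated as a short sketch (hypothetical names): columns reference a type by identity, not by its spelled-out name, so schema-qualified type references follow a schema rename automatically.

```sql
-- Hypothetical: logging.calls has a column of the type public.call_types.
ALTER SCHEMA public RENAME TO savepublic;
-- The column's declared type now reads savepublic.call_types, and a
-- subsequent DROP SCHEMA savepublic CASCADE drops that column with it.
```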

But that's why I'm getting the errors, because when I try to restore the public
schema, our custom types do already exist.

To your suggestion of adding "--if-exists --clean", that does remove 4 errors,
leaving me with 14 that are true errors.

My take away from this is that if we have schemas that are dependent on each
other, we can't backup/restore just 1. That means we must break the dependency
or else back them both up all the time (not desirable to do this frequently
because of size). Sigh...

I'll go back and look at what it would take to break the dependencies, but any
other suggestions would be welcome.

Thanks for the advice and pointers Adrian!

Kevin




Re: [GENERAL] pg_restore question

2016-09-19 Thread Adrian Klaver

On 09/19/2016 01:06 PM, kbran...@pwhome.com wrote:

--- adrian.kla...@aklaver.com wrote:

From: Adrian Klaver <adrian.kla...@aklaver.com>
To: kbran...@pwhome.com, pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore question
Date: Mon, 19 Sep 2016 12:46:24 -0700

On 09/19/2016 11:46 AM, kbran...@pwhome.com wrote:

I think I'm going to need some help in understanding a couple of restore issues.
This is for Pg 9.5.1.

It seems that if I create a dump using
pg_dump --clean --create --format=p --dbname=nms --schema=public > dump.sql
then the restore (after "drop schema public cascade") with "psql nms < dump.sql"
will create the schema and it loads correctly.

But if I dump using:
pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
then the restore, with the schema still there, relying on --clean to 
help:
pg_restore --dbname=nms --clean --create --schema=public .
will fail with:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE 
app_kinds nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  type 
"app_kinds" already exists
Command was: CREATE TYPE app_kinds AS ENUM (


First the --create  is a no-op as it only applies to the database as a
whole:

https://www.postgresql.org/docs/9.5/static/app-pgrestore.html

--create

 Create the database before restoring into it. If --clean is also
specified, drop and recreate the target database before connecting to it.


OK, we'll chalk that one up to "reading comprehension failure" on my part. :)
I'll drop that option especially because it's easy to work around.



Second, did it actually fail or did it just throw the error and keep on
going?


So I changed my process to:

# create backup just in case
echo "alter schema public rename to save; create schema public;" | psql
pg_restore --dbname=nms --schema=public -j3 .

It still shows all the stuff below (from the original email) and a lot more 
ending with:

WARNING: errors ignored on restore: 18


Meant to add to previous post. If any of the errors are of the 'objects 
does not exist' variety you can get rid of them using:


--if-exists

Use conditional commands (i.e. add an IF EXISTS clause) when 
cleaning database objects. This option is not valid unless --clean is 
also specified.





I'm sure you can see how that might alarm me. :)

The more I read about search_path and schemas, the more I'm thinking the issue 
is related to that.
I just haven't figured out how yet nor what to do about it.

Kevin


...

But if I drop the schema first AND create a blank schema (leaving off the create
gives me yet a 3rd set of errors), then I get a 2nd set of errors:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 266; 1259 148562 VIEW 
busy_log_view nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  column 
busy_log.call_type does not exist
LINE 12: WHEN (busy_log.call_type = 'U'::call_types) THEN...
   ^
Command was: CREATE VIEW busy_log_view AS
 SELECT busy_log.busy_log_pk,
busy_log.time_sent,
busy_log.source_id,
busy_log.targ...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
"busy_log_view" does not exist
Command was: ALTER TABLE busy_log_view OWNER TO nmsroot;
...

Here, it seems like the view is getting created too early, and that's with me
leaving the -j flag off, which I want to add.

What parts of the docs am I not understanding or what flags am I missing?

The 2nd attempt and 2nd set of errors is the closest to working and I'm starting
to think that this is a "search_path" issue. There is a 2nd schema (called
"logging") which has log tables while the call types are in public (and the
type is used in both schemas). This works normally because the search_path
includes both schemas. Before the dump I see:

nms=# show search_path;
   search_path
--
 "$user", public, logging
(1 row)

But in the "format=p" file, I see:

SET search_path = public, pg_catalog;

Is it possible the database's search_path isn't being used during the restore
but the incorrect one in the dump file is?
Note, the database was never dropped (just the schema), so its search path was
(should be) correct.

I did find a discussion about backup/restore and search_path from back in 2006
that makes me suspect the search_path even more, but if that's it, I don't
understand why the backup would put an invalid search_path in the backup file
nor what I might be able to do about that.

Thanks,
Kevin

---

Don't think this matters, but to be complete, this is on Centos 6.7. Pg was
compiled from source since the default Centos pack

Re: [GENERAL] pg_restore question

2016-09-19 Thread Adrian Klaver

On 09/19/2016 01:06 PM, kbran...@pwhome.com wrote:

--- adrian.kla...@aklaver.com wrote:

From: Adrian Klaver <adrian.kla...@aklaver.com>
To: kbran...@pwhome.com, pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore question
Date: Mon, 19 Sep 2016 12:46:24 -0700

On 09/19/2016 11:46 AM, kbran...@pwhome.com wrote:

I think I'm going to need some help in understanding a couple of restore issues.
This is for Pg 9.5.1.

It seems that if I create a dump using
pg_dump --clean --create --format=p --dbname=nms --schema=public > dump.sql
then the restore (after "drop schema public cascade") with "psql nms < dump.sql"
will create the schema and it loads correctly.

But if I dump using:
pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
then the restore, with the schema still there and relying on --clean to 
help:
pg_restore --dbname=nms --clean --create --schema=public .
will fail with:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE 
app_kinds nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  type 
"app_kinds" already exists
Command was: CREATE TYPE app_kinds AS ENUM (


First, the --create is a no-op as it only applies to the database as a
whole:

https://www.postgresql.org/docs/9.5/static/app-pgrestore.html

--create

 Create the database before restoring into it. If --clean is also
specified, drop and recreate the target database before connecting to it.


OK, we'll chalk that one up to "reading comprehension failure" on my part. :)
I'll drop that option especially because it's easy to work around.



Second, did it actually fail or did it just throw the error and keep on
going?


So I changed my process to:

# create backup just in case
echo "alter schema public rename to save; create schema public;" | psql
pg_restore --dbname=nms --schema=public -j3 .


If you want to see what is going on you can change the above to:

 pg_restore  --schema=public  --schema-only -f text_restore.sql

This will output the restore to plain text form in the file 
text_restore.sql. I added the --schema-only to filter out the data and 
make things a little easier to read. The -j option is a no-op when 
outputting to a file so I left it off.


Then you can compare the contents of the file to your original text dump.



It still shows all the stuff below (from the original email) and a lot more 
ending with:

WARNING: errors ignored on restore: 18

I'm sure you can see how that might alarm me. :)
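For what it's worth, pg_restore's default is to log each failing statement and keep going, which is why the run ends with an "errors ignored on restore: N" summary instead of aborting; the (real) `--exit-on-error` flag makes it stop at the first failure instead. A small sketch of counting such errors from a saved restore log; the log excerpt is a stand-in assembled from the failures quoted in this thread:

```shell
# Stand-in restore log with two of the failures quoted above:
cat > restore.log <<'EOF'
pg_restore: [archiver (db)] could not execute query: ERROR:  type "app_kinds" already exists
pg_restore: [archiver (db)] could not execute query: ERROR:  column busy_log.call_type does not exist
EOF

# Count the ignored errors, mirroring the "errors ignored on restore: N" summary:
grep -c 'could not execute query' restore.log
# -> 2
```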

The more I read about search_path and schemas, the more I'm thinking the issue 
is related to that.
I just haven't figured out how yet nor what to do about it.

Kevin


...

But if I drop the schema first AND create a blank schema (leaving off the create
gives me yet a 3rd set of errors), then I get a 2nd set of errors:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 266; 1259 148562 VIEW 
busy_log_view nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  column 
busy_log.call_type does not exist
LINE 12: WHEN (busy_log.call_type = 'U'::call_types) THEN...
   ^
Command was: CREATE VIEW busy_log_view AS
 SELECT busy_log.busy_log_pk,
busy_log.time_sent,
busy_log.source_id,
busy_log.targ...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
"busy_log_view" does not exist
Command was: ALTER TABLE busy_log_view OWNER TO nmsroot;
...

Here, it seems like the view is getting created too early, and that's with me
leaving the -j flag off, which I want to add.

What parts of the docs am I not understanding or what flags am I missing?

The 2nd attempt and 2nd set of errors are the closest to working and I'm starting
to think that this is a "search_path" issue. There is a 2nd schema (called
"logging") which has log tables while the call types are in public (and the
type is used in both schemas). This works normally because the search_path
includes both schemas. Before the dump I see:

nms=# show search_path;
   search_path
--
 "$user", public, logging
(1 row)

But in the "format=p" file, I see:

SET search_path = public, pg_catalog;

Is it possible the database's search_path isn't being used during the restore
but the incorrect one in the dump file is?
Note, the database was never dropped (just the schema), so its search path was
(should be) correct.

I did find a discussion about backup/restore and search_path from back in 2006
that makes me suspect the search_path even more, but if that's it, I don't
understand why the backup would put an invalid search_path in the backup file
nor what I might be able to do about that

Re: [GENERAL] pg_restore question

2016-09-19 Thread
> --- adrian.kla...@aklaver.com wrote:
> 
> From: Adrian Klaver <adrian.kla...@aklaver.com>
> To: kbran...@pwhome.com, pgsql-general@postgresql.org
> Subject: Re: [GENERAL] pg_restore question
> Date: Mon, 19 Sep 2016 12:46:24 -0700
> 
> On 09/19/2016 11:46 AM, kbran...@pwhome.com wrote:
> > I think I'm going to need some help in understanding a couple of restore 
> > issues.
> > This is for Pg 9.5.1.
> >
> > It seems that if I create a dump using
> > pg_dump --clean --create --format=p --dbname=nms --schema=public > 
> > dump.sql
> > then the restore (after "drop schema public cascade") with "psql nms < 
> > dump.sql"
> > will create the schema and it loads correctly.
> >
> > But if I dump using:
> > pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
> > then the restore, with the schema still there and relying on --clean to 
> > help:
> > pg_restore --dbname=nms --clean --create --schema=public .
> > will fail with:
> >
> > pg_restore: [archiver (db)] Error while PROCESSING TOC:
> > pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE 
> > app_kinds nmsroot
> > pg_restore: [archiver (db)] could not execute query: ERROR:  type 
> > "app_kinds" already exists
> > Command was: CREATE TYPE app_kinds AS ENUM (
> 
> First, the --create is a no-op as it only applies to the database as a 
> whole:
> 
> https://www.postgresql.org/docs/9.5/static/app-pgrestore.html
> 
> --create
> 
>  Create the database before restoring into it. If --clean is also 
> specified, drop and recreate the target database before connecting to it.

OK, we'll chalk that one up to "reading comprehension failure" on my part. :)
I'll drop that option especially because it's easy to work around.

> 
> Second, did it actually fail or did it just throw the error and keep on 
> going?

So I changed my process to:

# create backup just in case
echo "alter schema public rename to save; create schema public;" | psql
pg_restore --dbname=nms --schema=public -j3 .

It still shows all the stuff below (from the original email) and a lot more 
ending with:

WARNING: errors ignored on restore: 18

I'm sure you can see how that might alarm me. :)

The more I read about search_path and schemas, the more I'm thinking the issue 
is related to that.
I just haven't figured out how yet nor what to do about it.

Kevin

> > ...
> >
> > But if I drop the schema first AND create a blank schema (leaving off the 
> > create
> > gives me yet a 3rd set of errors), then I get a 2nd set of errors:
> >
> > pg_restore: [archiver (db)] Error while PROCESSING TOC:
> > pg_restore: [archiver (db)] Error from TOC entry 266; 1259 148562 VIEW 
> > busy_log_view nmsroot
> > pg_restore: [archiver (db)] could not execute query: ERROR:  column 
> > busy_log.call_type does not exist
> > LINE 12: WHEN (busy_log.call_type = 'U'::call_types) THEN...
> >^
> > Command was: CREATE VIEW busy_log_view AS
> >  SELECT busy_log.busy_log_pk,
> > busy_log.time_sent,
> > busy_log.source_id,
> > busy_log.targ...
> > pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
> > "busy_log_view" does not exist
> > Command was: ALTER TABLE busy_log_view OWNER TO nmsroot;
> > ...
> >
> > Here, it seems like the view is getting created too early, and that's with 
> > me
> > leaving the -j flag off, which I want to add.
> >
> > What parts of the docs am I not understanding or what flags am I missing?
> >
> > The 2nd attempt and 2nd set of errors are the closest to working and I'm 
> > starting
> > to think that this is a "search_path" issue. There is a 2nd schema (called
> > "logging") which has log tables while the call types are in public (and the
> > type is used in both schemas). This works normally because the search_path
> > includes both schemas. Before the dump I see:
> >
> > nms=# show search_path;
> >search_path
> > --
> >  "$user", public, logging
> > (1 row)
> >
> > But in the "format=p" file, I see:
> >
> > SET search_path = public, pg_catalog;
> >
> > Is it possible the database's search_path isn't being used during the 
> > restore
> > but the incorrect one in the dump file is?
> > Note, the dat

Re: [GENERAL] pg_restore question

2016-09-19 Thread Adrian Klaver

On 09/19/2016 11:46 AM, kbran...@pwhome.com wrote:

I think I'm going to need some help in understanding a couple of restore issues.
This is for Pg 9.5.1.

It seems that if I create a dump using
pg_dump --clean --create --format=p --dbname=nms --schema=public > dump.sql
then the restore (after "drop schema public cascade") with "psql nms < dump.sql"
will create the schema and it loads correctly.

But if I dump using:
pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
then the restore, with the schema still there and relying on --clean to 
help:
pg_restore --dbname=nms --clean --create --schema=public .
will fail with:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE 
app_kinds nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  type 
"app_kinds" already exists
Command was: CREATE TYPE app_kinds AS ENUM (


First, the --create is a no-op as it only applies to the database as a 
whole:


https://www.postgresql.org/docs/9.5/static/app-pgrestore.html

--create

Create the database before restoring into it. If --clean is also 
specified, drop and recreate the target database before connecting to it.



Second, did it actually fail or did it just throw the error and keep on 
going?






...

But if I drop the schema first AND create a blank schema (leaving off the create
gives me yet a 3rd set of errors), then I get a 2nd set of errors:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 266; 1259 148562 VIEW 
busy_log_view nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  column 
busy_log.call_type does not exist
LINE 12: WHEN (busy_log.call_type = 'U'::call_types) THEN...
   ^
Command was: CREATE VIEW busy_log_view AS
 SELECT busy_log.busy_log_pk,
busy_log.time_sent,
busy_log.source_id,
busy_log.targ...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
"busy_log_view" does not exist
Command was: ALTER TABLE busy_log_view OWNER TO nmsroot;
...

Here, it seems like the view is getting created too early, and that's with me
leaving the -j flag off, which I want to add.

What parts of the docs am I not understanding or what flags am I missing?

The 2nd attempt and 2nd set of errors are the closest to working and I'm starting
to think that this is a "search_path" issue. There is a 2nd schema (called
"logging") which has log tables while the call types are in public (and the
type is used in both schemas). This works normally because the search_path
includes both schemas. Before the dump I see:

nms=# show search_path;
   search_path
--
 "$user", public, logging
(1 row)

But in the "format=p" file, I see:

SET search_path = public, pg_catalog;

Is it possible the database's search_path isn't being used during the restore
but the incorrect one in the dump file is?
Note, the database was never dropped (just the schema), so its search path was
(should be) correct.

I did find a discussion about backup/restore and search_path from back in 2006
that makes me suspect the search_path even more, but if that's it, I don't
understand why the backup would put an invalid search_path in the backup file
nor what I might be able to do about that.

Thanks,
Kevin

---

Don't think this matters, but to be complete, this is on CentOS 6.7. Pg was
compiled from source since the default CentOS package would be version 8.4.20 
(very old).





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_restore question

2016-09-19 Thread
I think I'm going to need some help in understanding a couple of restore issues.
This is for Pg 9.5.1.

It seems that if I create a dump using
pg_dump --clean --create --format=p --dbname=nms --schema=public > dump.sql
then the restore (after "drop schema public cascade") with "psql nms < dump.sql"
will create the schema and it loads correctly.

But if I dump using:
pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
then the restore, with the schema still there and relying on --clean to 
help:
pg_restore --dbname=nms --clean --create --schema=public .
will fail with:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE 
app_kinds nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  type 
"app_kinds" already exists
Command was: CREATE TYPE app_kinds AS ENUM (
...

But if I drop the schema first AND create a blank schema (leaving off the create
gives me yet a 3rd set of errors), then I get a 2nd set of errors:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 266; 1259 148562 VIEW 
busy_log_view nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR:  column 
busy_log.call_type does not exist
LINE 12: WHEN (busy_log.call_type = 'U'::call_types) THEN...
   ^
Command was: CREATE VIEW busy_log_view AS
 SELECT busy_log.busy_log_pk,
busy_log.time_sent,
busy_log.source_id,
busy_log.targ...
pg_restore: [archiver (db)] could not execute query: ERROR:  relation 
"busy_log_view" does not exist
Command was: ALTER TABLE busy_log_view OWNER TO nmsroot;
...

Here, it seems like the view is getting created too early, and that's with me
leaving the -j flag off, which I want to add.

What parts of the docs am I not understanding or what flags am I missing?

The 2nd attempt and 2nd set of errors are the closest to working and I'm starting
to think that this is a "search_path" issue. There is a 2nd schema (called
"logging") which has log tables while the call types are in public (and the
type is used in both schemas). This works normally because the search_path
includes both schemas. Before the dump I see:

nms=# show search_path;
   search_path
--
 "$user", public, logging
(1 row)

But in the "format=p" file, I see:

SET search_path = public, pg_catalog;

Is it possible the database's search_path isn't being used during the restore
but the incorrect one in the dump file is?
Note, the database was never dropped (just the schema), so its search path was
(should be) correct.

I did find a discussion about backup/restore and search_path from back in 2006
that makes me suspect the search_path even more, but if that's it, I don't
understand why the backup would put an invalid search_path in the backup file
nor what I might be able to do about that.

Thanks,
Kevin

---

Don't think this matters, but to be complete, this is on CentOS 6.7. Pg was
compiled from source since the default CentOS package would be version 8.4.20 
(very old).
(very old).




Re: [GENERAL] pg_restore out of memory

2016-08-02 Thread Miguel Ramos


Greetings,

To all those who helped out with this problem, I'm sorry it took me so
long to respond. For the record, the matter is solved, at least for us,
but we had to repeat a lot of things to make sure.


First, the "out of memory" problem repeated itself when restoring that
single table, after doing a schema-only restore and removing all
foreign key constraints.

Second, the other weirdness on this table was a couple of constraints
to make sure that the three arrays on each row were of the same length.
But that was not the problem.
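For context, such a check might look like the following; the table and column names come from later in this thread, but the exact constraint is a hedged reconstruction, not the poster's actual DDL:

```sql
-- Hypothetical reconstruction: require the three per-row arrays
-- (latitude, longitude, height) to have the same length.
ALTER TABLE inspection.positioned_scan
    ADD CONSTRAINT arrays_same_length CHECK (
        array_length(latitude, 1) = array_length(longitude, 1)
        AND array_length(longitude, 1) = array_length(height, 1)
    );
```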


It turned out that the backup file was the problem.

Plus, I'm convinced that this was our mistake: someone simply missed an error
message during the backup or during some copy of the file.
Also, we should have already repeated the backup, and maybe we didn't.
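Two cheap checks cover exactly this failure mode: `pg_restore -l dump >/dev/null` forces the archive TOC to be read (a truncated custom-format file will usually fail right there), and a byte-for-byte comparison catches a bad copy. A sketch of the latter, with stand-in files since it needs no server at all:

```shell
# Stand-ins for the original dump and its transferred copy:
printf 'custom-format dump bytes' > original.dump
cp original.dump copy.dump

# Byte-for-byte comparison; a damaged transfer would make cmp fail:
if cmp -s original.dump copy.dump; then
    echo "copies match"
else
    echo "copy is damaged" >&2
fi
# -> copies match
```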


We did a new custom dump from within the latest pgAdmin III, and that one
restored just fine (using the same old 9.1.8 pg_restore).

It could have been a bug in pg_dump 9.1.8 since the good backup was
from a newer version. But this didn't seem so likely and because
repeating a full backup would force us to keep the original database on
the server for a few more days (dumps take so long) we didn't do it.

We did, however, do a dump of that single table with pg_dump 9.1.8, and that
one also restored just fine.



Our immediate practical problem is over.
The only complaint would be that the "out of memory" message is
unfriendly.
If it would be useful for us to run some additional tests, or to examine the
file, we would be glad to return the help.


Best Regards,


-- 
Miguel Ramos







Re: [GENERAL] pg_restore out of memory

2016-07-19 Thread Miguel Ramos

On Thu, 14-07-2016 at 10:52 +0100, Miguel Ramos wrote:
> 
> On Wed, 13-07-2016 at 18:42 -0400, Tom Lane wrote:
> > 
> > I wrote:
> > > 
> > > I'm still suspicious that this might be some sort of NOTICE-
> > > processing-
> > > related buffer bloat.  Could you try loading the data with the
> > > server's
> > > log_min_messages level cranked down to NOTICE, so you can see
> > > from
> > > the
> > > postmaster log whether any NOTICEs are being issued to the
> > > pg_restore
> > > session?
> > 
> > BTW, I experimented with that theory by creating a table with a
> > BEFORE
> > INSERT trigger function that emits a NOTICE, and then making
> > pg_restore
> > restore a lot of data into it.  I could not see any memory growth
> > in
> > the pg_restore process.  However, I was testing 9.1.22, not 9.1.8.
> > Also, some of the misbehaviors we've discovered along these lines
> > have
> > been timing-sensitive, meaning that the problem might or might not
> > reproduce for another person even with the same software version.
> > Are you running pg_restore locally on the same machine as the
> > server,
> > or across a network --- and if the latter, how fast is the network?
> > 
> > regards, tom lane
> > 
> 
> I was running pg_restore locally.
> The disk containing the backup, however, is on NAS.
> The NAS is mounted on the server using SMB and the FreeBSD kernel
> implementation of smbfs (mount_smbfs -I ... /mnt).
> The kernel smbfs is notoriously outdated and sometimes we get
> timeouts.
> 
> However, those timeouts happen randomly and this "out of memory"
> happens consistently.
> This time, the server was no longer under heavy load, the log lines
> are
> consecutive, there was no activity during the start of the COPY
> statement and the error.
> 
> The network is 1Gbps with a single unmanaged 24-port switch.
> The server
> has two aggregated links to the switch.
> 
> 
> I ran pg_restore locally because the server is in another office,
> connected to mine through a VPN.
> 
> Now I have arranged for a PC to be there for me and my next test will
> be to do the restore using the latest pgadmin.
> 
> 
> Thanks,
> 
> -- Miguel Ramos


I tried the restore using pgAdmin III 1.22.1.
This time from a Windows PC connected to the server through a 1Gbps switch.

Unfortunately the result was the same, and this was my best bet.


I see (transcribed by hand from screenshot):
...
pg_restore: processing data for table "inspection.positioned_scan"
out of memory

Process returned exit code 1.


I hadn't yet set log_min_messages to 'notice'. But as client_min_messages is at 
'notice', shouldn't these be displayed by a verbose pg_restore?
Maybe during the weekend I can have more verbose logging.

Now I'm repeating the backup (maybe the file is bad) and then I will repeat the 
restore with log_min_messages to 'notice'.

I suppose setting log_statement to 'all' is no longer necessary? 

What else?


-- 
Miguel Ramos






Re: [GENERAL] pg_restore out of memory

2016-07-15 Thread Miguel Ramos

On Fri, 15-07-2016 at 07:16 -0700, Adrian Klaver wrote:
> On 07/15/2016 12:37 AM, Miguel Ramos wrote:
> > What else?
> 
> The pg_dump file you are restoring from is a custom format.
> 
> Do you have room to do something like this?:
> 
> 1) pg_restore -d some_db -U some_user -t inspection.positioned_scan 
> /mnt/paysdeloire2013_convertida2.1.dump
> 
> 

Of course, I came up with that too.
I'll get back soon.

If that fails, then it really starts looking like a bug.

Thanks,

-- 
Miguel Ramos





Re: [GENERAL] pg_restore out of memory

2016-07-15 Thread Adrian Klaver

On 07/15/2016 12:37 AM, Miguel Ramos wrote:


On Thu, 14-07-2016 at 10:52 +0100, Miguel Ramos wrote:


On Wed, 13-07-2016 at 18:42 -0400, Tom Lane wrote:


I wrote:


I'm still suspicious that this might be some sort of NOTICE-
processing-
related buffer bloat.  Could you try loading the data with the
server's
log_min_messages level cranked down to NOTICE, so you can see
from
the
postmaster log whether any NOTICEs are being issued to the
pg_restore
session?


BTW, I experimented with that theory by creating a table with a
BEFORE
INSERT trigger function that emits a NOTICE, and then making
pg_restore
restore a lot of data into it.  I could not see any memory growth
in
the pg_restore process.  However, I was testing 9.1.22, not 9.1.8.
Also, some of the misbehaviors we've discovered along these lines
have
been timing-sensitive, meaning that the problem might or might not
reproduce for another person even with the same software version.
Are you running pg_restore locally on the same machine as the
server,
or across a network --- and if the latter, how fast is the network?

regards, tom lane



I was running pg_restore locally.
The disk containing the backup, however, is on NAS.
The NAS is mounted on the server using SMB and the FreeBSD kernel
implementation of smbfs (mount_smbfs -I ... /mnt).
The kernel smbfs is notoriously outdated and sometimes we get
timeouts.

However, those timeouts happen randomly and this "out of memory"
happens consistently.
This time, the server was no longer under heavy load, the log lines
are
consecutive, there was no activity during the start of the COPY
statement and the error.

The network is 1Gbps with a single unmanaged 24-port switch.
The server
has two aggregated links to the switch.


I ran pg_restore locally because the server is in another office,
connected to mine through a VPN.

Now I have arranged for a PC to be there for me and my next test will
be to do the restore using the latest pgadmin.


Thanks,

-- Miguel Ramos



I tried the restore using pgAdmin III 1.22.1.
This time from a Windows PC connected to the server through a 1Gbps
switch.

Unfortunately the result was the same, and this was my best bet.


I see (transcribed by hand from screenshot):
...
pg_restore: processing data for table "inspection.positioned_scan"
out of memory

Process returned exit code 1.


I hadn't yet set log_min_messages to 'notice'. But as
client_min_messages is at 'notice', shouldn't these be displayed by a verbose
pg_restore?
Maybe during the weekend I can have more verbose logging.

Now I'm repeating the backup (maybe the file is bad) and then I will
repeat the restore with log_min_messages to 'notice'.

I suppose setting log_statement to 'all' is no longer necessary?

What else?


The pg_dump file you are restoring from is a custom format.

Do you have room to do something like this?:

1) pg_restore -d some_db -U some_user -t inspection.positioned_scan 
/mnt/paysdeloire2013_convertida2.1.dump






--
Miguel Ramos






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore out of memory

2016-07-15 Thread Tom Lane
Miguel Ramos  writes:
> I see (transcribed by hand from screenshot):
> ...
> pg_restore: processing data for table "inspection.positioned_scan"
> out of memory

> Process returned exit code 1.

Right, so that confirms that the OOM happens while sending data for that
table; but we're still no closer as to why.

> I hadn't yet set log_min_messages to 'notice'. But as client_min_messages is 
> at 'notice', shouldn't these be displayed by a verbose pg_restore?

The theory I'd been considering was that NOTICE messages were being sent
by the server during the COPY (and not logged in the postmaster log
because log_min_messages wasn't high enough), but for some reason they
were not immediately processed and printed by pg_restore.  In such a case
they'd accumulate in libpq's input buffer.  After enough such messages
you'd eventually get an OOM failure.  Now the big hole in this theory is
that it's unclear why the server would be sending any notices.  But I
can't think of other good ideas.
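On 9.1, as used in this thread, there is no ALTER SYSTEM yet, so acting on this means editing postgresql.conf and reloading; a minimal, hedged fragment:

```
# postgresql.conf — temporary, revert after the test ('notice' is chatty)
log_min_messages = notice

# then reload without a restart, e.g.:
#   pg_ctl reload -D $PGDATA        (or: SELECT pg_reload_conf();)
```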

> Now I'm repeating the backup (maybe the file is bad) and then I will repeat 
> the restore with log_min_messages to 'notice'.

OK.

> I suppose setting log_statement to 'all' is no longer necessary? 

I agree; we already know which statement is failing.

regards, tom lane






Re: [GENERAL] pg_restore out of memory

2016-07-14 Thread Miguel Ramos

Yes.
Both are 9.1.8; I checked just now.

-- Miguel

On Wed, 13-07-2016 at 13:59 -0700, John R Pierce wrote:
> On 7/13/2016 1:51 PM, Miguel Ramos wrote:
> > Finally, here are the log messages at the moment of the error.
> > It is clearly not while building indices.
> > 
> > The table in question is a big one, 111GB.
> > Fields latitude, longitude and height are arrays of length around
> > 500-
> > 700 on each row (double and real).
> > 
> > So, what does this mean?
> > Was it the client that aborted? I think I saw that "unexpected
> > message
> > type 0x58" on other types of interruptions.
> 
> is pg_restore, and the postgres server all the same version?
> 
> $ pg_restore --version
> pg_restore (PostgreSQL) 9.3.13
> 
> $ su - postgres
> -bash-4.1$ psql -c "select version()"
> version
> -
>  PostgreSQL 9.3.13 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
> 4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit
> (1 row)
> 
> 
> -- 
> john r pierce, recycling bits in santa cruz
> 
> 
> 




Re: [GENERAL] pg_restore out of memory

2016-07-14 Thread Miguel Ramos


On Wed, 13-07-2016 at 18:42 -0400, Tom Lane wrote:
> I wrote:
> > I'm still suspicious that this might be some sort of NOTICE-
> > processing-
> > related buffer bloat.  Could you try loading the data with the
> > server's
> > log_min_messages level cranked down to NOTICE, so you can see from
> > the
> > postmaster log whether any NOTICEs are being issued to the
> > pg_restore
> > session?
> 
> BTW, I experimented with that theory by creating a table with a
> BEFORE
> INSERT trigger function that emits a NOTICE, and then making
> pg_restore
> restore a lot of data into it.  I could not see any memory growth in
> the pg_restore process.  However, I was testing 9.1.22, not 9.1.8.
> Also, some of the misbehaviors we've discovered along these lines
> have
> been timing-sensitive, meaning that the problem might or might not
> reproduce for another person even with the same software version.
> Are you running pg_restore locally on the same machine as the server,
> or across a network --- and if the latter, how fast is the network?
> 
>   regards, tom lane
> 

I was running pg_restore locally.
The disk containing the backup, however, is on NAS.
The NAS is mounted on the server using SMB and the FreeBSD kernel
implementation of smbfs (mount_smbfs -I ... /mnt).
The kernel smbfs is notoriously outdated and sometimes we get timeouts.

However, those timeouts happen randomly and this "out of memory"
happens consistently.
This time, the server was no longer under heavy load, the log lines are
consecutive, there was no activity during the start of the COPY
statement and the error.

The network is 1Gbps with a single unmanaged 24-port switch.
The server has two aggregated links to the switch.


I ran pg_restore locally because the server is in another office, connected to 
mine through a VPN.

Now I have arranged for a PC to be there for me, and my next test will be to do 
the restore using the latest pgAdmin.


Thanks,

-- Miguel Ramos




Re: [GENERAL] pg_restore out of memory

2016-07-14 Thread Miguel Ramos

On Wed, 13-07-2016 at 17:15 -0400, Tom Lane wrote:
> Miguel Ramos  writes:
> > So, what does this mean?
> > Was it the client that aborted? I think I saw that "unexpected
> > message
> > type 0x58" on other types of interruptions.
> 
> Yeah, 0x58 is ASCII 'X' which is a Terminate message.  Between that
> and
> the unexpected-EOF report, it's quite clear that the client side went
> belly-up, not the server.  We still don't know exactly why, but given
> that pg_restore reports "out of memory" before quitting, there must
> be
> some kind of memory leak going on inside pg_restore.
> 
> > Jul 13 20:10:10 ema postgres[97889]: [867-1] LOG:  statement: COPY
> > positioned_scan (id_dataset, id_acquired_set, sequence_number,
> > id_scan_dataset, latitude, longitude, height, srid, srid_vertical)
> > FROM stdin;
> 
> I'm guessing from the column names that you've got some PostGIS data
> types in this table.  I wonder if that's a contributing factor.
> 
> I'm still suspicious that this might be some sort of NOTICE-
> processing-
> related buffer bloat.  Could you try loading the data with the
> server's
> log_min_messages level cranked down to NOTICE, so you can see from
> the
> postmaster log whether any NOTICEs are being issued to the pg_restore
> session?
> 
>   regards, tom lane


No, no PostGIS here. The columns latitude, longitude and height are
just arrays. The first two are arrays of double and height is an array
of single. So, if anything, this could be related to array processing.

Thanks,

-- Miguel Ramos





Re: [GENERAL] pg_restore out of memory

2016-07-14 Thread Miguel Ramos

That's 3 years and 3 months with absolutely zero maintenance.
Apart from the scripts I left back then.

During that time, it was used by an average of 10 people, some 9T of
sensor data entered at the rate of 60G/week, and another 3T of analysis
data was produced.

The expression "cutting down on maintenance" could be appropriate.
But that was not my decision.

Right now, updating is really difficult.
They're on a rush, and they are using the server 15 hours a day and
maybe they'll need the weekend as well.

Plus, the OS is now old as well, the ports tree is out of sync with the
OS... it will all be a lot harder.

-- Miguel

On Wed, 13-07-2016 at 14:19 -0700, John R Pierce wrote:
> On 7/13/2016 2:11 PM, Miguel Ramos wrote:
> > Yes.
> > Both 9.1.8, I checked right now.
> 
> 9.1 is up to 9.1.22; that's a lot of bug fixes you're missing. 9.1.8 was 
> released 2013-02-07, 9.1.22 on 2016-05-12
> 
> 
> -- 
> john r pierce, recycling bits in santa cruz
> 
> 
> 




Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Tom Lane
I wrote:
> I'm still suspicious that this might be some sort of NOTICE-processing-
> related buffer bloat.  Could you try loading the data with the server's
> log_min_messages level cranked down to NOTICE, so you can see from the
> postmaster log whether any NOTICEs are being issued to the pg_restore
> session?

BTW, I experimented with that theory by creating a table with a BEFORE
INSERT trigger function that emits a NOTICE, and then making pg_restore
restore a lot of data into it.  I could not see any memory growth in
the pg_restore process.  However, I was testing 9.1.22, not 9.1.8.
Also, some of the misbehaviors we've discovered along these lines have
been timing-sensitive, meaning that the problem might or might not
reproduce for another person even with the same software version.
Are you running pg_restore locally on the same machine as the server,
or across a network --- and if the latter, how fast is the network?

regards, tom lane




Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread John R Pierce

On 7/13/2016 2:11 PM, Miguel Ramos wrote:

Yes.
Both 9.1.8, I checked right now.


9.1 is up to 9.1.22; that's a lot of bug fixes you're missing. 9.1.8 was 
released 2013-02-07, 9.1.22 on 2016-05-12



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Tom Lane
Miguel Ramos  writes:
> So, what does this mean?
> Was it the client that aborted? I think I saw that "unexpected message
> type 0x58" on other types of interruptions.

Yeah, 0x58 is ASCII 'X' which is a Terminate message.  Between that and
the unexpected-EOF report, it's quite clear that the client side went
belly-up, not the server.  We still don't know exactly why, but given
that pg_restore reports "out of memory" before quitting, there must be
some kind of memory leak going on inside pg_restore.
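
For the record, the byte-to-character mapping is easy to check from a shell
(nothing here is PostgreSQL-specific, just ASCII):

```shell
# Confirm that byte 0x58 is ASCII 'X' -- the message type the frontend
# sends as Terminate in the wire protocol.
printf 'X' | od -An -tx1 | tr -d ' \n'    # prints 58
```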

> Jul 13 20:10:10 ema postgres[97889]: [867-1] LOG:  statement: COPY 
> positioned_scan (id_dataset, id_acquired_set, sequence_number, 
> id_scan_dataset, latitude, longitude, height, srid, srid_vertical) FROM stdin;

I'm guessing from the column names that you've got some PostGIS data
types in this table.  I wonder if that's a contributing factor.

I'm still suspicious that this might be some sort of NOTICE-processing-
related buffer bloat.  Could you try loading the data with the server's
log_min_messages level cranked down to NOTICE, so you can see from the
postmaster log whether any NOTICEs are being issued to the pg_restore
session?

regards, tom lane




Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Adrian Klaver

On 07/13/2016 01:51 PM, Miguel Ramos wrote:



Finally, here are the log messages at the moment of the error.
It is clearly not while building indices.

The table in question is a big one, 111GB.
Fields latitude, longitude and height are arrays of length around 500-
700 on each row (double and real).

So, what does this mean?
Was it the client that aborted? I think I saw that "unexpected message
type 0x58" on other types of interruptions.


So where are you running the pg_restore from, manually from command line 
or from within a script?






Jul 13 20:10:10 ema postgres[97889]: [867-1] LOG:  statement: COPY 
positioned_scan (id_dataset, id_acquired_set, sequence_number, id_scan_dataset, 
latitude, longitude, height, srid, srid_vertical) FROM stdin;
Jul 13 20:10:10 ema postgres[97889]: [867-2]
Jul 13 21:08:06 ema postgres[97889]: [868-1] ERROR:  unexpected message type 
0x58 during COPY from stdin
Jul 13 21:08:06 ema postgres[97889]: [868-2] CONTEXT:  COPY positioned_scan, 
line 2779323
Jul 13 21:08:06 ema postgres[97889]: [868-3] STATEMENT:  COPY positioned_scan 
(id_dataset, id_acquired_set, sequence_number, id_scan_dataset, latitude, 
longitude, height, srid, srid_vertical) FROM stdin;
Jul 13 21:08:06 ema postgres[97889]: [868-4]
Jul 13 21:08:06 ema postgres[97889]: [869-1] LOG:  could not send data to 
client: Broken pipe
Jul 13 21:08:06 ema postgres[97889]: [869-2] STATEMENT:  COPY positioned_scan 
(id_dataset, id_acquired_set, sequence_number, id_scan_dataset, latitude, 
longitude, height, srid, srid_vertical) FROM stdin;
Jul 13 21:08:06 ema postgres[97889]: [869-3]
Jul 13 21:08:06 ema postgres[97889]: [870-1] LOG:  unexpected EOF on client 
connection

Thanks,

-- Miguel


On Tue, 12-07-2016 at 15:10 -0400, Tom Lane wrote:

Miguel Ramos  writes:

This because I have the impression that it is during index
creation,
where I think client role would be minimal.


Hard to believe really, given the spelling of the message.  But
anyway,
be sure you do the run with log_statement = all so that it's clear
what
is being worked on when the error happens.

regards, tom lane








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Miguel Ramos

Yes.
Both 9.1.8, I checked right now.

-- Miguel

On Wed, 13-07-2016 at 13:59 -0700, John R Pierce wrote:
> On 7/13/2016 1:51 PM, Miguel Ramos wrote:
> > Finally, here are the log messages at the moment of the error.
> > It is clearly not while building indices.
> > 
> > The table in question is a big one, 111GB.
> > Fields latitude, longitude and height are arrays of length around
> > 500-
> > 700 on each row (double and real).
> > 
> > So, what does this mean?
> > Was it the client that aborted? I think I saw that "unexpected
> > message
> > type 0x58" on other types of interruptions.
> 
> is pg_restore, and the postgres server all the same version?
> 
> $ pg_restore --version
> pg_restore (PostgreSQL) 9.3.13
> 
> $ su - postgres
> -bash-4.1$ psql -c "select version()"
> version
> 
>   PostgreSQL 9.3.13 on x86_64-unknown-linux-gnu, compiled by gcc
> (GCC) 
> 4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit
> (1 row)
> 
> 
> -- 
> john r pierce, recycling bits in santa cruz
> 
> 




Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Miguel Ramos


On Tue, 12-07-2016 at 13:08 +, Sameer Kumar wrote:
> On Tue, 12 Jul 2016, 7:25 p.m. Miguel Ramos,
>  wrote:
> > I found two relevant threads on the mailing-lists.
> > The most recent one sugested that postgresql was being configured
> > to use
> > more memory than what's available.
> > The older one sugested that the system limits on the size of the
> > data or
> > stack segments were lower than required.
> > 
> > So here are some server parameters (relevant or otherwise):
> > 
> >  > max_connections = 100
> >  > shared_buffers = 4GB  -- 25% of RAM
> >  > temp_buffers = 32MB  -- irrelevant?
> >  > work_mem = 64MB
> >  > maintenance_work_mem = was 1G lowered to 256M then 64M
> 
> 
> Why did you lower it? I think increasing it should help better. But
> 1GB seems like fine.

That advice came from that thread, and maybe the problem there was very
different. The idea I get is that PostgreSQL can always find a way to do its
work, maybe using an out-of-core algorithm.

If you tell it to use a lot of memory, then it will try to use RAM and
then it really may run out of memory.

So, basically, increasing the memory available is a performance
improvement, if you feel safe that the memory really is available.

But maybe that logic applies only to work_mem...
And it's also work_mem that is difficult to bound, according to the
manual.
I don't really know...



> >  > wal_buffers = -1  -- should mean 1/32 of shared_buffers = 128MB
> 
> Increase this during the restore, may be 512MB

I'll keep that advice in mind, but now I have posted the log messages to the
list.

> >  > checkpoint_segments = 64  -- WAL segments are 16MB
> >  > effective_cache_size = 8GB  -- irrelevant?
> > 
> > 
> > I suspect that the restore fails when constructing the indices.
> > After
> > the process is aborted, the data appears to be all or most there,
> > but no
> > indices.
> 
> What is logged in database log files? Have you checked that?

This time I collected the logs.
I posted the 10 relevant lines as a reply to another message.
I'll repeat only the ERROR line here:

Jul 13 21:08:06 ema postgres[97889]: [868-1] ERROR:  unexpected message type 
0x58 during COPY from stdin



> What are your vm.dirty_ratio and vm.dirty_background_ratio? I think
> reducing them may help. But can not really say what exactly would
> help unless you are able to get the error source in db logs

This is a FreeBSD server.
I'm not really sure what the equivalent would be.
Also, I don't think tuning the VM would help.

This is quite a deterministic abort, 12-13 hours after the beginning of
the restore, and does not change much whether it is done during the
night or during the day with 10 people working intensively.


Thanks,

-- Miguel





Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread John R Pierce

On 7/13/2016 1:51 PM, Miguel Ramos wrote:

Finally, here are the log messages at the moment of the error.
It is clearly not while building indices.

The table in question is a big one, 111GB.
Fields latitude, longitude and height are arrays of length around 500-
700 on each row (double and real).

So, what does this mean?
Was it the client that aborted? I think I saw that "unexpected message
type 0x58" on other types of interruptions.


is pg_restore, and the postgres server all the same version?

$ pg_restore --version
pg_restore (PostgreSQL) 9.3.13

$ su - postgres
-bash-4.1$ psql -c "select version()"
version

 PostgreSQL 9.3.13 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 
4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit

(1 row)


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Miguel Ramos


Finally, here are the log messages at the moment of the error.
It is clearly not while building indices.

The table in question is a big one, 111GB.
Fields latitude, longitude and height are arrays of length around 500-
700 on each row (double and real).

So, what does this mean?
Was it the client that aborted? I think I saw that "unexpected message
type 0x58" on other types of interruptions.



Jul 13 20:10:10 ema postgres[97889]: [867-1] LOG:  statement: COPY 
positioned_scan (id_dataset, id_acquired_set, sequence_number, id_scan_dataset, 
latitude, longitude, height, srid, srid_vertical) FROM stdin;
Jul 13 20:10:10 ema postgres[97889]: [867-2]
Jul 13 21:08:06 ema postgres[97889]: [868-1] ERROR:  unexpected message type 
0x58 during COPY from stdin
Jul 13 21:08:06 ema postgres[97889]: [868-2] CONTEXT:  COPY positioned_scan, 
line 2779323
Jul 13 21:08:06 ema postgres[97889]: [868-3] STATEMENT:  COPY positioned_scan 
(id_dataset, id_acquired_set, sequence_number, id_scan_dataset, latitude, 
longitude, height, srid, srid_vertical) FROM stdin;
Jul 13 21:08:06 ema postgres[97889]: [868-4]
Jul 13 21:08:06 ema postgres[97889]: [869-1] LOG:  could not send data to 
client: Broken pipe
Jul 13 21:08:06 ema postgres[97889]: [869-2] STATEMENT:  COPY positioned_scan 
(id_dataset, id_acquired_set, sequence_number, id_scan_dataset, latitude, 
longitude, height, srid, srid_vertical) FROM stdin;
Jul 13 21:08:06 ema postgres[97889]: [869-3]
Jul 13 21:08:06 ema postgres[97889]: [870-1] LOG:  unexpected EOF on client 
connection

Thanks,

-- Miguel


On Tue, 12-07-2016 at 15:10 -0400, Tom Lane wrote:
> Miguel Ramos  writes:
> > This because I have the impression that it is during index
> > creation,
> > where I think client role would be minimal.
> 
> Hard to believe really, given the spelling of the message.  But
> anyway,
> be sure you do the run with log_statement = all so that it's clear
> what
> is being worked on when the error happens.
> 
>   regards, tom lane






Re: [GENERAL] pg_restore out of memory

2016-07-13 Thread Karsten Hilbert
On Tue, Jul 12, 2016 at 12:25:08PM +0100, Miguel Ramos wrote:

> > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
> > pg_restore: [custom archiver] out of memory
> > 12:09:56.58  9446.593u+1218.508s 24.3%  167+2589k  6+0io  0pf+0sw 6968822cs
...
> I suspect that the restore fails when constructing the indices. After the
> process is aborted, the data appears to be all or most there, but no
> indices.
...
> I don't know what else to try.

You could try restoring w/o indices and re-adding them later.
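
A rough sketch of how to do that with a custom-format archive, using a
filtered table of contents (the file names here are made up; CONSTRAINT
entries such as primary keys would need the same treatment):

```shell
# Sketch: split the archive TOC so the data loads in one pass and the
# indexes are built in a separate, later pass.
pg_restore -l paysdeloire.dump > full.toc
grep -v ' INDEX ' full.toc > data.toc       # everything except index entries
grep    ' INDEX ' full.toc > index.toc      # the index entries alone
pg_restore -L data.toc  -d recovery paysdeloire.dump
pg_restore -L index.toc -d recovery paysdeloire.dump   # later, off-hours
```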

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Tom Lane
Miguel Ramos  writes:
> This because I have the impression that it is during index creation,
> where I think client role would be minimal.

Hard to believe really, given the spelling of the message.  But anyway,
be sure you do the run with log_statement = all so that it's clear what
is being worked on when the error happens.

regards, tom lane




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Miguel Ramos
On Tue, 12-07-2016 at 11:58 -0400, Tom Lane wrote:
> 
> Anyway, it would be useful to try running the restore with a more
> modern
> version of pg_restore, to see if that helps.
> 
>   regards, tom lane
> 
> 

I have the scheduled restart tonight.
So, I will do the other test first, nevertheless.

This because I have the impression that it is during index creation,
where I think client role would be minimal.
Maybe I saw something in the logs when this problem was first reported
internally.

I can't spare a second 700G for doing both tests concurrently.

I will get back when I have more news.

Thanks,

--
Miguel




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Tom Lane
Miguel Ramos  writes:
> At 15:40 on 12-07-2016, Tom Lane wrote:
>> Unless you're running pg_restore under a really small ulimit, this would
>> seem to suggest some kind of memory leak in pg_restore itself.  I wonder
>> how many objects in your dump (how long is "pg_restore -l" output)?

> pg_restore -l | wc gives me:
>  1055    7984   70675

> It looks small to me.

Yeah, the archive TOC is clearly not large enough to cause any problem in
itself.  I'm wondering at this point about libpq buffer bloat.  We've
fixed a number of problems in that area over the last few years, though
in a quick review of the commit logs I don't see anything that clearly
bears on your problem.  (Commit 86888054a92aeca4 is pretty interesting
but applies to mostly-server-to-client data transfer, which is the wrong
direction.)  I wonder whether your data is such that loading it would
trigger a bunch of NOTICEs from the server?

Anyway, it would be useful to try running the restore with a more modern
version of pg_restore, to see if that helps.

regards, tom lane




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Miguel Ramos


At 16:23 on 12-07-2016, Miguel Ramos wrote:

It looks to me like this error is pg_restore itself running out of
memory,
not reporting a server-side OOM condition.  You could verify that by
looking in the server log to see whether any out-of-memory error appeared
there.  But assuming that I'm right, the other responses suggesting
tweaking server configurations are not on-point at all.

[...]



- PostgreSQL 9.1.8 custom compiled to get 32kB blocks


9.1.8 is pretty old ...


Of course, I could try a recent pg_restore.
That'll take me a bit of time too, but I can try.

--
Miguel Ramos





Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Miguel Ramos


At 15:40 on 12-07-2016, Tom Lane wrote:

Miguel Ramos  writes:

We have backed up a database and now when trying to restore it to the
same server we get this:



# pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
pg_restore: [custom archiver] out of memory


It looks to me like this error is pg_restore itself running out of memory,
not reporting a server-side OOM condition.  You could verify that by
looking in the server log to see whether any out-of-memory error appeared
there.  But assuming that I'm right, the other responses suggesting
tweaking server configurations are not on-point at all.


The logs of the last attempt are already gone.
Obviously, I'll keep tonight's logs.



Unless you're running pg_restore under a really small ulimit, this would
seem to suggest some kind of memory leak in pg_restore itself.  I wonder
how many objects in your dump (how long is "pg_restore -l" output)?


pg_restore -l | wc gives me:
1055    7984   70675

It looks small to me.
We don't have a lot of tables; instead we have really huge tables.
We try to keep the schema normalized where possible, and we only denormalize 
when the resulting tuples would become too small for PostgreSQL, that is, when 
the per-row overhead becomes prohibitive.





- PostgreSQL 9.1.8 custom compiled to get 32kB blocks


9.1.8 is pretty old ...


Oh, I'll forward your email to those who were older than I, the many far 
wiser than I, but whose love of database servers is certainly not 
stronger than mine, by far.


Thanks,


--
Miguel Ramos




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Tom Lane
Miguel Ramos  writes:
> We have backed up a database and now when trying to restore it to the 
> same server we get this:

>>> # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
>>> pg_restore: [custom archiver] out of memory

It looks to me like this error is pg_restore itself running out of memory,
not reporting a server-side OOM condition.  You could verify that by
looking in the server log to see whether any out-of-memory error appeared
there.  But assuming that I'm right, the other responses suggesting
tweaking server configurations are not on-point at all.

Unless you're running pg_restore under a really small ulimit, this would
seem to suggest some kind of memory leak in pg_restore itself.  I wonder
how many objects in your dump (how long is "pg_restore -l" output)?

> - PostgreSQL 9.1.8 custom compiled to get 32kB blocks

9.1.8 is pretty old ...

regards, tom lane




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Sameer Kumar
On Tue, 12 Jul 2016, 7:25 p.m. Miguel Ramos, <org.postgre...@miguel.ramos.name> wrote:

>
> Hi,
>
> We have backed up a database and now when trying to restore it to the
> same server we get this:
>
>  > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
>  > pg_restore: [custom archiver] out of memory
>  > 12:09:56.58  9446.593u+1218.508s 24.3%  167+2589k  6+0io  0pf+0sw
> 6968822cs
>
>
> Some information about the application:
>
> - We have sensor data, including pictures, and number crunching output,
> so the large tables on this database have 319, 279, 111 and 26GB.
> Mostly on TOAST pages, but the 279GB one divides it evenly. This
> database is 765GB. We try to keep them under 4TB.
> - Transactions are large, some 100 MB at a time.
> - We also use PostGIS.
>
> About the server (dedicated):
>
> - FreeBSD 9.1-RELEASE #0 on AMD64
> - 16 GB of RAM
> - 8x3GB hardware RAID 10
> - 10TB slice for pgdata UFS-formatted and 32kB block
> - PostgreSQL 9.1.8 custom compiled to get 32kB blocks
> - Installed in 2013 with ~10 people working with it, 145 days uptime today.
>
> I found two relevant threads on the mailing-lists.
> The most recent one suggested that postgresql was being configured to use
> more memory than what's available.
> The older one suggested that the system limits on the size of the data or
> stack segments were lower than required.
>
> So here are some server parameters (relevant or otherwise):
>
>  > max_connections = 100
>  > shared_buffers = 4GB  -- 25% of RAM
>  > temp_buffers = 32MB  -- irrelevant?
>  > work_mem = 64MB
>  > maintenance_work_mem = was 1G lowered to 256M then 64M
>


Why did you lower it? I think increasing it should help better. But 1GB
seems like fine.


 > wal_buffers = -1  -- should mean 1/32 of shared_buffers = 128MB
>

Increase this during the restore, may be 512MB

 > checkpoint_segments = 64  -- WAL segments are 16MB
>  > effective_cache_size = 8GB  -- irrelevant?
>
>
> I suspect that the restore fails when constructing the indices. After
> the process is aborted, the data appears to be all or most there, but no
> indices.
>

What is logged in database log files? Have you checked that?

So, all I did so far, was lowering maintenance_work_mem and it didn't work.
>
> System limits, as you can see, are at defaults (32GB for data and 512MB
> for stack):
>
>  > # limit
>  > cputime      unlimited
>  > filesize     unlimited
>  > datasize     33554432 kbytes
>  > stacksize    524288 kbytes
>  > coredumpsize unlimited
>  > memoryuse    unlimited
>  > vmemoryuse   unlimited
>  > descriptors  11095
>  > memorylocked unlimited
>  > maxproc      5547
>  > sbsize       unlimited
>  > swapsize     unlimited
>
> Shared memory is configured to allow for the single shared memory
> segment postgresql appears to use, plus a bit of extra (8GB):
>
>  > # ipcs -M
>  > shminfo:
>  > shmmax:  8589934592   (max shared memory segment size)
>  > shmmin:  1            (min shared memory segment size)
>  > shmmni:  192          (max number of shared memory identifiers)
>  > shmseg:  128          (max shared memory segments per process)
>  > shmall:  2097152      (max amount of shared memory in pages)
>
> And semaphores (irrelevant?)...
>
>  > # ipcs -S
>  > seminfo:
>  > semmni:  256(# of semaphore identifiers)
>  > semmns:  512(# of semaphores in system)
>  > semmnu:  256(# of undo structures in system)
>  > semmsl:  340(max # of semaphores per id)
>  > semopm:  100(max # of operations per semop call)
>  > semume:   50(max # of undo entries per process)
>  > semusz:  632(size in bytes of undo structure)
>  > semvmx:32767(semaphore maximum value)
>  > semaem:16384(adjust on exit max value)
>

What are your vm.dirty_ratio and vm.dirty_background_ratio? I think
reducing them may help. But can not really say what exactly would help
unless you are able to get the error source in db logs


>
>
> I don't know what else to try.
> I lowered maintenance_work_mem without restarting the server.
> In some of the attempts, but not all, the restore was done while people
> were working.
>
> Each attempt takes 12 hours...
> We couldn't use the directory -Fd dump because it's postgresql 9.1.
> The original database is still on the server, this is a test restore.
>
> We have about one or two months of slack before we really need to remove
> them from the server to recover space.
>
>
> --
> Miguel Ramos
>
>
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Felipe Santos
2016-07-12 8:54 GMT-03:00 Miguel Ramos :

>
> At 12:32 on 12-07-2016, Felipe Santos wrote:
>
>> I would try lowering max_connections to 50 and then set work_mem to 128MB.
>>
>> After that restart your server and retry the restore.
>>
>
> Ok, I will try restarting tonight.
> work_mem is the parameter I was most afraid of.
>
> I'll post some news in 24h...
>
>
> --
> Miguel Ramos
>
>
>

After the restore (being it a success or failure) don't forget to set the
parameters back to their original values (max_conn = 100 and work_mem=64MB).
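
A minimal sketch of that tune-and-revert cycle (the data directory path is an
assumption; note that max_connections needs a full restart, while work_mem
alone would only need a reload):

```shell
# Snapshot postgresql.conf, apply the restore-time values, and put the
# original back afterwards.
PGDATA=/usr/local/pgsql/data
cd "$PGDATA"
cp postgresql.conf postgresql.conf.pre-restore
sed -e 's/^max_connections = 100$/max_connections = 50/' \
    -e 's/^work_mem = 64MB$/work_mem = 128MB/' \
    postgresql.conf.pre-restore > postgresql.conf
pg_ctl -D "$PGDATA" restart       # max_connections requires a restart
# ... run the pg_restore here ...
mv postgresql.conf.pre-restore postgresql.conf
pg_ctl -D "$PGDATA" restart       # back to the original values
```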

BR


Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Miguel Ramos


At 12:32 on 12-07-2016, Felipe Santos wrote:

I would try lowering max_connections to 50 and then set work_mem to 128MB.

After that restart your server and retry the restore.


Ok, I will try restarting tonight.
work_mem is the parameter I was most afraid of.

I'll post some news in 24h...

--
Miguel Ramos




Re: [GENERAL] pg_restore out of memory

2016-07-12 Thread Felipe Santos
2016-07-12 8:25 GMT-03:00 Miguel Ramos :

>
> Hi,
>
> We have backed up a database and now when trying to restore it to the same
> server we get this:
>
> > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
> > pg_restore: [custom archiver] out of memory
> > 12:09:56.58  9446.593u+1218.508s 24.3%  167+2589k  6+0io  0pf+0sw
> 6968822cs
>
>
> Some information about the application:
>
> - We have sensor data, including pictures, and number-crunching output,
> so the large tables on this database have 319, 279, 111 and 26GB,
> mostly on TOAST pages (though the 279GB one divides it evenly). This database
> is 765GB. We try to keep them under 4TB.
> - Transactions are large, some 100 MB at a time.
> - We also use PostGIS.
>
> About the server (dedicated):
>
> - FreeBSD 9.1-RELEASE #0 on AMD64
> - 16 GB of RAM
> - 8x3GB hardware RAID 10
> - 10TB slice for pgdata UFS-formatted and 32kB block
> - PostgreSQL 9.1.8 custom compiled to get 32kB blocks
> - Installed in 2013 with ~10 people working with it, 145 days uptime today.
>
> I found two relevant threads on the mailing-lists.
> The most recent one suggested that PostgreSQL was being configured to use
> more memory than what's available.
> The older one suggested that the system limits on the size of the data or
> stack segments were lower than required.
>
> So here are some server parameters (relevant or otherwise):
>
> > max_connections = 100
> > shared_buffers = 4GB  -- 25% of RAM
> > temp_buffers = 32MB  -- irrelevant?
> > work_mem = 64MB
> > maintenance_work_mem = was 1G lowered to 256M then 64M
> > wal_buffers = -1  -- should mean 1/32 of shared_buffers = 128MB
> > checkpoint_segments = 64  -- WAL segments are 16MB
> > effective_cache_size = 8GB  -- irrelevant?
>
>
> I suspect that the restore fails when constructing the indices. After the
> process is aborted, the data appears to be all or mostly there, but no
> indices.
> So all I have done so far is lower maintenance_work_mem, and it didn't work.
>
> System limits, as you can see, are at defaults (32GB for data and 512MB
> for stack):
>
> > # limit
> > cputime      unlimited
> > filesize     unlimited
> > datasize     33554432 kbytes
> > stacksize    524288 kbytes
> > coredumpsize unlimited
> > memoryuse    unlimited
> > vmemoryuse   unlimited
> > descriptors  11095
> > memorylocked unlimited
> > maxproc      5547
> > sbsize       unlimited
> > swapsize     unlimited
>
> Shared memory is configured to allow for the single shared memory segment
> postgresql appears to use, plus a bit of extra (8GB):
>
> > # ipcs -M
> > shminfo:
> > shmmax:  8589934592  (max shared memory segment size)
> > shmmin:  1           (min shared memory segment size)
> > shmmni:  192         (max number of shared memory identifiers)
> > shmseg:  128         (max shared memory segments per process)
> > shmall:  2097152     (max amount of shared memory in pages)
>
> And semaphores (irrelevant?)...
>
> > # ipcs -S
> > seminfo:
> > semmni:  256    (# of semaphore identifiers)
> > semmns:  512    (# of semaphores in system)
> > semmnu:  256    (# of undo structures in system)
> > semmsl:  340    (max # of semaphores per id)
> > semopm:  100    (max # of operations per semop call)
> > semume:  50     (max # of undo entries per process)
> > semusz:  632    (size in bytes of undo structure)
> > semvmx:  32767  (semaphore maximum value)
> > semaem:  16384  (adjust on exit max value)
>
>
> I don't know what else to try.
> I lowered maintenance_work_mem without restarting the server.
> In some of the attempts, but not all, the restore was done while people
> were working.
>
> Each attempt takes 12 hours...
> We couldn't use the directory -Fd dump because it's postgresql 9.1.
> The original database is still on the server, this is a test restore.
>
> We have about one or two months of slack before we really need to remove
> them from the server to recover space.
>
>
> --
> Miguel Ramos
>
>
>


Hi Miguel,

I would try lowering max_connections to 50 and then set work_mem to 128MB.

After that restart your server and retry the restore.

Tell us if that helps.
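On a pre-9.4 server such as the 9.1 instance in this thread there is no ALTER SYSTEM, so the suggested settings have to go into postgresql.conf directly. A minimal sketch of that (the PGDATA path and the pg_ctl invocation are assumptions, not from the thread; here PGDATA defaults to a scratch directory so the sketch is safe to run, but in real use it must point at your actual data directory):

```shell
# Append temporary restore tuning to postgresql.conf; for duplicated
# settings, the last occurrence in the file wins.
PGDATA="${PGDATA:-$(mktemp -d)}"
CONF="$PGDATA/postgresql.conf"

cat >> "$CONF" <<'EOF'
# temporary restore tuning -- revert after the restore
max_connections = 50
work_mem = 128MB
EOF

# Then restart and verify the active values before retrying the restore:
#   pg_ctl -D "$PGDATA" restart -w
#   psql -Atc 'SHOW max_connections'
#   psql -Atc 'SHOW work_mem'
```

After the restore, remove the appended lines and restart again so the server returns to its normal settings.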

Regards,


[GENERAL] pg_restore out of memory

2016-07-12 Thread Miguel Ramos


Hi,

We have backed up a database and now when trying to restore it to the 
same server we get this:


> # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
> pg_restore: [custom archiver] out of memory
> 12:09:56.58  9446.593u+1218.508s 24.3%  167+2589k  6+0io  0pf+0sw 
6968822cs



Some information about the application:

- We have sensor data, including pictures, and number-crunching output, 
so the large tables on this database have 319, 279, 111 and 26GB, 
mostly on TOAST pages (though the 279GB one divides it evenly). This 
database is 765GB. We try to keep them under 4TB.

- Transactions are large, some 100 MB at a time.
- We also use PostGIS.

About the server (dedicated):

- FreeBSD 9.1-RELEASE #0 on AMD64
- 16 GB of RAM
- 8x3GB hardware RAID 10
- 10TB slice for pgdata UFS-formatted and 32kB block
- PostgreSQL 9.1.8 custom compiled to get 32kB blocks
- Installed in 2013 with ~10 people working with it, 145 days uptime today.

I found two relevant threads on the mailing-lists.
The most recent one suggested that PostgreSQL was being configured to use 
more memory than what's available.
The older one suggested that the system limits on the size of the data or 
stack segments were lower than required.


So here are some server parameters (relevant or otherwise):

> max_connections = 100
> shared_buffers = 4GB  -- 25% of RAM
> temp_buffers = 32MB  -- irrelevant?
> work_mem = 64MB
> maintenance_work_mem = was 1G lowered to 256M then 64M
> wal_buffers = -1  -- should mean 1/32 of shared_buffers = 128MB
> checkpoint_segments = 64  -- WAL segments are 16MB
> effective_cache_size = 8GB  -- irrelevant?
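A quick back-of-the-envelope check of these settings against the 16 GB of RAM. The formula below is a rough worst case (shared_buffers plus one work_mem allocation per connection plus maintenance_work_mem), not an exact accounting, since a single complex query can use several work_mem allocations at once:

```shell
shared_buffers_mb=4096        # 4GB
work_mem_mb=64
max_connections=100
maintenance_work_mem_mb=1024  # the original 1GB setting

# Rough worst-case peak: shared memory + per-connection sorts + maintenance.
total_mb=$(( shared_buffers_mb + work_mem_mb * max_connections + maintenance_work_mem_mb ))
echo "worst-case budget: ${total_mb} MB of 16384 MB RAM"
# prints "worst-case budget: 11520 MB of 16384 MB RAM"
```

That leaves headroom on paper, which is consistent with the failure being a per-process limit rather than total RAM exhaustion.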


I suspect that the restore fails when constructing the indices. After 
the process is aborted, the data appears to be all or mostly there, but no 
indices.

So all I have done so far is lower maintenance_work_mem, and it didn't work.

System limits, as you can see, are at defaults (32GB for data and 512MB 
for stack):


> # limit
> cputime      unlimited
> filesize     unlimited
> datasize     33554432 kbytes
> stacksize    524288 kbytes
> coredumpsize unlimited
> memoryuse    unlimited
> vmemoryuse   unlimited
> descriptors  11095
> memorylocked unlimited
> maxproc      5547
> sbsize       unlimited
> swapsize     unlimited

Shared memory is configured to allow for the single shared memory 
segment postgresql appears to use, plus a bit of extra (8GB):


> # ipcs -M
> shminfo:
> shmmax:  8589934592  (max shared memory segment size)
> shmmin:  1           (min shared memory segment size)
> shmmni:  192         (max number of shared memory identifiers)
> shmseg:  128         (max shared memory segments per process)
> shmall:  2097152     (max amount of shared memory in pages)

And semaphores (irrelevant?)...

> # ipcs -S
> seminfo:
> semmni:  256    (# of semaphore identifiers)
> semmns:  512    (# of semaphores in system)
> semmnu:  256    (# of undo structures in system)
> semmsl:  340    (max # of semaphores per id)
> semopm:  100    (max # of operations per semop call)
> semume:  50     (max # of undo entries per process)
> semusz:  632    (size in bytes of undo structure)
> semvmx:  32767  (semaphore maximum value)
> semaem:  16384  (adjust on exit max value)


I don't know what else to try.
I lowered maintenance_work_mem without restarting the server.
In some of the attempts, but not all, the restore was done while people 
were working.


Each attempt takes 12 hours...
We couldn't use the directory -Fd dump because it's postgresql 9.1.
The original database is still on the server, this is a test restore.

We have about one or two months of slack before we really need to remove 
them from the server to recover space.



--
Miguel Ramos




Re: [GENERAL] pg_restore error-s after pg_dump

2016-06-22 Thread Adrian Klaver

On 06/22/2016 04:00 AM, SDAG wrote:

Hi
Postgres version :
*PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-3), 64-bit*

We transferred our system from one server to another (with better CPU and RAM)
using a VM converter, and when I try to back up the database I get an error:


The above is going to need more explanation.

Did you convert the VM in place or move from one VM to another?

Is the architecture of the new VM the same as the old?

Did the conversion take place on a live VM or was it stopped?


Just to be clear the dump below was taken after the conversion, correct?



*pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading table inheritance information
pg_dump: reading rewrite rules
pg_dump: finding extension members
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "account_account"
pg_dump: [archiver (db)] query failed: ERROR:  missing chunk number 0 for
toast value 3297740 in pg_toast_2619
pg_dump: [archiver (db)] query was: SELECT a.attnum, a.attname, a.atttypmod,
a.attstattarget, a.attstorage, t.typstorage, a.attnotnull, a.atthasdef,
a.attisdropped, a.attlen, a.attalign, a.attislocal,
pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname,
array_to_string(a.attoptions, ', ') AS attoptions, CASE WHEN a.attcollation
<> t.typcollation THEN a.attcollation ELSE 0 END AS attcollation,
pg_catalog.array_to_string(ARRAY(SELECT pg_catalog.quote_ident(option_name)
|| ' ' || pg_catalog.quote_literal(option_value) FROM
pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name), E',
') AS attfdwoptions FROM pg_catalog.pg_attribute a LEFT JOIN
pg_catalog.pg_type t ON a.atttypid = t.oid WHERE a.attrelid =
'274619'::pg_catalog.oid AND a.attnum > 0::pg_catalog.int2 ORDER BY
a.attrelid, a.attnum*

I solved it using *reindexdb*; after that I tried *pg_dump -U postgres my_db >
/home/mydb.backup* and it was successful. Then I tried to restore the database
to ensure that the backup is valid:
*psql -U postgres new_db < /home/mydb.backup*

And got these errors:


Does table tbl1 have an id column?



*ERROR : extra data after last expected column
Context: COPY tbl1, line1: "1 2013-12-02 2013-12-02  9387.74
9775.4621148549086494"   6180.95   80262 "
ERROR : column "id" of relation "tbl1" does not exists
invalid command \N
invalid command \N
invalid command \N
.
invalid command \N
invalid command \.
ERROR:  syntax error at or near "87685"
LINE 1: 87685 SO87690 1 170468 2015-05-30 2015 05 30
^
invalid command \.



The below looks like you are trying to restore over existing data.



ERROR:  duplicate key value violates unique constraint "ir_act_client_pkey"
DETAIL:  Key (id)=(103) already exists.
CONTEXT:  COPY ir_act_client, line 21: ""
ERROR:  duplicate key value violates unique constraint
"ir_act_report_xml_pkey"
DETAIL:  Key (id)=(733) already exists.
CONTEXT:  COPY ir_act_report_xml, line 59: ""
ERROR:  duplicate key value violates unique constraint "ir_act_server_pkey"
DETAIL:  Key (id)=(703) already exists.
CONTEXT:  COPY ir_act_server, line 6: ""
ERROR:  duplicate key value violates unique constraint "ir_act_window_pkey"
DETAIL:  Key (id)=(1) already exists.
CONTEXT:  COPY ir_act_window, line 235: "334Last Product Inventories
ir.actions.act_window   \N  1   2013-07-03 10:39:48.399509
2013-12-16 16:38:..."
ERROR:  duplicate key value violates unique constraint
"ir_act_window_group_rel_act_id_gid_key"
DETAIL:  Key (act_id, gid)=(76, 1) already exists.
CONTEXT:  COPY ir_act_window_group_rel, line 14: ""
ERROR:  duplicate key value violates unique constraint
"ir_act_window_view_pkey"
DETAIL:  Key (id)=(100) already exists.
CONTEXT:  COPY ir_act_window_view, line 88: ""*


Any advice on how to solve this problem?



--
View this message in context: 
http://postgresql.nabble.com/pg-restore-error-s-after-pg-dump-tp5909084.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com



[GENERAL] pg_restore error-s after pg_dump

2016-06-22 Thread SDAG
Hi
Postgres version : 
*PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-3), 64-bit*

We transferred our system from one server to another (with better CPU and RAM)
using a VM converter, and when I try to back up the database I get an error:

*pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading table inheritance information
pg_dump: reading rewrite rules
pg_dump: finding extension members
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "account_account"
pg_dump: [archiver (db)] query failed: ERROR:  missing chunk number 0 for
toast value 3297740 in pg_toast_2619
pg_dump: [archiver (db)] query was: SELECT a.attnum, a.attname, a.atttypmod,
a.attstattarget, a.attstorage, t.typstorage, a.attnotnull, a.atthasdef,
a.attisdropped, a.attlen, a.attalign, a.attislocal,
pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname,
array_to_string(a.attoptions, ', ') AS attoptions, CASE WHEN a.attcollation
<> t.typcollation THEN a.attcollation ELSE 0 END AS attcollation,
pg_catalog.array_to_string(ARRAY(SELECT pg_catalog.quote_ident(option_name)
|| ' ' || pg_catalog.quote_literal(option_value) FROM
pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name), E',
') AS attfdwoptions FROM pg_catalog.pg_attribute a LEFT JOIN
pg_catalog.pg_type t ON a.atttypid = t.oid WHERE a.attrelid =
'274619'::pg_catalog.oid AND a.attnum > 0::pg_catalog.int2 ORDER BY
a.attrelid, a.attnum*

I solved it using *reindexdb*; after that I tried *pg_dump -U postgres my_db >
/home/mydb.backup* and it was successful. Then I tried to restore the database
to ensure that the backup is valid:
*psql -U postgres new_db < /home/mydb.backup*

And got these errors:

*ERROR : extra data after last expected column
Context: COPY tbl1, line1: "1 2013-12-02 2013-12-02  9387.74 
9775.4621148549086494"   6180.95   80262 "
ERROR : column "id" of relation "tbl1" does not exists
invalid command \N
invalid command \N
invalid command \N
. 
invalid command \N
invalid command \.
ERROR:  syntax error at or near "87685"
LINE 1: 87685 SO87690 1 170468 2015-05-30 2015 05 30 
^
invalid command \.

ERROR:  duplicate key value violates unique constraint "ir_act_client_pkey"
DETAIL:  Key (id)=(103) already exists.
CONTEXT:  COPY ir_act_client, line 21: ""
ERROR:  duplicate key value violates unique constraint
"ir_act_report_xml_pkey"
DETAIL:  Key (id)=(733) already exists.
CONTEXT:  COPY ir_act_report_xml, line 59: ""
ERROR:  duplicate key value violates unique constraint "ir_act_server_pkey"
DETAIL:  Key (id)=(703) already exists.
CONTEXT:  COPY ir_act_server, line 6: ""
ERROR:  duplicate key value violates unique constraint "ir_act_window_pkey"
DETAIL:  Key (id)=(1) already exists.
CONTEXT:  COPY ir_act_window, line 235: "334Last Product Inventories   
ir.actions.act_window   \N  1   2013-07-03 10:39:48.399509 
2013-12-16 16:38:..."
ERROR:  duplicate key value violates unique constraint
"ir_act_window_group_rel_act_id_gid_key"
DETAIL:  Key (act_id, gid)=(76, 1) already exists.
CONTEXT:  COPY ir_act_window_group_rel, line 14: ""
ERROR:  duplicate key value violates unique constraint
"ir_act_window_view_pkey"
DETAIL:  Key (id)=(100) already exists.
CONTEXT:  COPY ir_act_window_view, line 88: ""*


Any advice on how to solve this problem?
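A hedged sketch of a cleaner retry for the restore described above: restoring a plain-SQL dump over a database that already contains the schema and data yields exactly the duplicate-key errors and the "invalid command \N" cascade shown (after a COPY fails, psql misreads the remaining data lines as commands). Database and file names are taken from the post; since the commands need a running server, the sketch writes them to a script rather than executing them:

```shell
cat > retry_restore.sh <<'EOF'
#!/bin/sh
set -e
dropdb   -U postgres new_db    # start from an empty database
createdb -U postgres new_db
# ON_ERROR_STOP makes psql exit at the first real error instead of
# producing cascades of follow-on errors.
psql -U postgres -v ON_ERROR_STOP=1 -f /home/mydb.backup new_db
EOF
chmod +x retry_restore.sh
```

With ON_ERROR_STOP set, the first reported error is the actual problem rather than noise from a derailed COPY.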



--
View this message in context: 
http://postgresql.nabble.com/pg-restore-error-s-after-pg-dump-tp5909084.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] pg_restore casts check constraints differently

2016-03-30 Thread Tom Lane
Amit Langote  writes:
> destdb=# ALTER TABLE c ADD CONSTRAINT p_a_check CHECK (a IN ('a', 'b', 'c'));
> destdb=# \d c
> ...
> Check constraints:
> "p_a_check" CHECK (a::text = ANY (ARRAY['a'::character varying,
> 'b'::character varying, 'c'::character varying]::text[]))

Hm.  It seems like the parser is doing something weird with IN there.
If you just do a simple comparison the constant ends up as TEXT to start
with:

regression=# CREATE TABLE pp (a varchar, CHECK (a = 'a')); 
regression=# \d pp
...
Check constraints:
"pp_a_check" CHECK (a::text = 'a'::text)

Or for that matter

regression=# CREATE TABLE p (a varchar, CHECK (a = any(array['a', 'b', 'c'])));
regression=# \d p
...
Check constraints:
"p_a_check" CHECK (a::text = ANY (ARRAY['a'::text, 'b'::text, 'c'::text]))

I wonder why you don't get an array of text constants in the IN case.

regards, tom lane




Re: [GENERAL] pg_restore casts check constraints differently

2016-03-30 Thread Amit Langote
On Wed, Mar 30, 2016 at 6:45 AM, Tom Lane  wrote:
> Joshua Ma  writes:
>> This might not be a common case, but we're using pg_dump in a testing
>> environment to check migrations - 1) we initialize the db from HEAD,
>> pg_dump it, 2) we initialize the db from migration_base.sql, apply
>> migrations, pg_dump it, and 3) compare the two dumps to verify that our
>> migrations are correct wrt schema.
>
>> However, we're seeing pg_restore transforming our check constraints with
>> different casting.
>
> It's not really different.  What you're seeing is pg_dump (or actually
> ruleutils.c) choosing to dump some implicit casts explicitly to ensure
> that the expression is parsed the same way next time.  It might be
> overly conservative to do so, but we've found that erring in this
> direction tends to avoid breakage when the result is loaded into another
> server version; it's a bit like the intentional overparenthesization.

Saw a post on pgsql-bugs awhile back that looked related:

http://www.postgresql.org/message-id/011001d17b05$4e70c000$eb524000$@commoninf.com

In their case, the restored expression in different shape caused some
problems elsewhere. An example:

$ createdb srcdb
$ psql srcdb
psql (9.6devel)
Type "help" for help.

srcdb=# CREATE TABLE p (a varchar, CHECK (a IN ('a', 'b', 'c')));
CREATE TABLE

srcdb=# ^D\q

$ createdb destdb
$ pg_dump srcdb | psql destdb
$ psql destdb
psql (9.6devel)
Type "help" for help.

destdb=# \d
        List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | p    | table | amit
(1 row)

destdb=# CREATE TABLE c (LIKE p);
CREATE TABLE

destdb=# ALTER TABLE c ADD CONSTRAINT p_a_check CHECK (a IN ('a', 'b', 'c'));
ALTER TABLE

destdb=# \d c
         Table "public.c"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 a      | character varying |
Check constraints:
    "p_a_check" CHECK (a::text = ANY (ARRAY['a'::character varying, 'b'::character varying, 'c'::character varying]::text[]))

destdb=# INSERT INTO c VALUES ('a'), ('b'), ('c');
INSERT 0 3

destdb=# ALTER TABLE c INHERIT p;
ERROR:  child table "c" has different definition for check constraint
"p_a_check"

Hmm, how to go about to get it to match what p_a_check looks on p?  Maybe:

destdb=# CREATE TABLE c (LIKE p INCLUDING CONSTRAINTS);

destdb=# \d c
         Table "public.c"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 a      | character varying |
Check constraints:
    "p_a_check" CHECK (a::text = ANY (ARRAY['a'::character varying::text, 'b'::character varying::text, 'c'::character varying::text]))

Thanks,
Amit




Re: [GENERAL] pg_restore casts check constraints differently

2016-03-29 Thread David G. Johnston
On Tue, Mar 29, 2016 at 7:40 PM, Tom Lane  wrote:

> "David G. Johnston"  writes:
> > On Tue, Mar 29, 2016 at 2:45 PM, Tom Lane  wrote:
> >> It's not really different.  What you're seeing is pg_dump (or actually
> >> ruleutils.c) choosing to dump some implicit casts explicitly to ensure
> >> that the expression is parsed the same way next time.
>
> > Why don't we just use ruleutils.c to generate \d results so that what we
> > end up showing is canonical?
>
> We do.  AFAIK, what psql's \d shows in these cases is the same as what
> pg_dump will print.  Joshua's complaint is that it isn't necessarily
> identical to what was input.
>

Then I must be lacking info here, because given that the two constraints
shown using \d are equivalent, a canonical output form would admit only one
valid representation.

Looking at it this way, Joshua's goal is achieved even if we don't output
exactly what was input, because regardless of the input form, comparing the
direct-HEAD and migration results would give the same outcome.

I guess my "so that" clause is overly optimistic - we'd likely need to
expend more effort to actually derive a canonical version of a given
arbitrary constraint and our current implementation is allowed to simplify
without deriving a canonical form: in this case failing to consistently
choose whether to cast the array elements and leave the array type itself
implied versus leaving the array elements in their natural form and casting
the final array to the necessary type.  And, at the same time, ideally
recognizing that the built-in types "character varying" and "text" are
compatible and thus ('value'::varchar)::text should be simplified to
'value'::text.

David J.


Re: [GENERAL] pg_restore casts check constraints differently

2016-03-29 Thread Tom Lane
"David G. Johnston"  writes:
> On Tue, Mar 29, 2016 at 2:45 PM, Tom Lane  wrote:
>> It's not really different.  What you're seeing is pg_dump (or actually
>> ruleutils.c) choosing to dump some implicit casts explicitly to ensure
>> that the expression is parsed the same way next time.

> ​Why don't we just use ruleutils.c to generate \d results so that what we
> end up showing is canonical?

We do.  AFAIK, what psql's \d shows in these cases is the same as what
pg_dump will print.  Joshua's complaint is that it isn't necessarily
identical to what was input.

regards, tom lane




Re: [GENERAL] pg_restore casts check constraints differently

2016-03-29 Thread David G. Johnston
On Tue, Mar 29, 2016 at 2:45 PM, Tom Lane  wrote:

> Joshua Ma  writes:
> > This might not be a common case, but we're using pg_dump in a testing
> > environment to check migrations - 1) we initialize the db from HEAD,
> > pg_dump it, 2) we initialize the db from migration_base.sql, apply
> > migrations, pg_dump it, and 3) compare the two dumps to verify that our
> > migrations are correct wrt schema.
>
> > However, we're seeing pg_restore transforming our check constraints with
> > different casting.
>
> It's not really different.  What you're seeing is pg_dump (or actually
> ruleutils.c) choosing to dump some implicit casts explicitly to ensure
> that the expression is parsed the same way next time.  It might be
> overly conservative to do so, but we've found that erring in this
> direction tends to avoid breakage when the result is loaded into another
> server version; it's a bit like the intentional overparenthesization.
>

Why don't we just use ruleutils.c to generate \d results so that what we
end up showing is canonical?

David J.


Re: [GENERAL] pg_restore casts check constraints differently

2016-03-29 Thread Tom Lane
Joshua Ma  writes:
> This might not be a common case, but we're using pg_dump in a testing
> environment to check migrations - 1) we initialize the db from HEAD,
> pg_dump it, 2) we initialize the db from migration_base.sql, apply
> migrations, pg_dump it, and 3) compare the two dumps to verify that our
> migrations are correct wrt schema.

> However, we're seeing pg_restore transforming our check constraints with
> different casting.

It's not really different.  What you're seeing is pg_dump (or actually
ruleutils.c) choosing to dump some implicit casts explicitly to ensure
that the expression is parsed the same way next time.  It might be
overly conservative to do so, but we've found that erring in this
direction tends to avoid breakage when the result is loaded into another
server version; it's a bit like the intentional overparenthesization.

regards, tom lane




[GENERAL] pg_restore casts check constraints differently

2016-03-29 Thread Joshua Ma
This might not be a common case, but we're using pg_dump in a testing
environment to check migrations - 1) we initialize the db from HEAD,
pg_dump it, 2) we initialize the db from migration_base.sql, apply
migrations, pg_dump it, and 3) compare the two dumps to verify that our
migrations are correct wrt schema.

However, we're seeing pg_restore transforming our check constraints with
different casting.

# \d arrayed_library
CONSTRAINT arrayed_library_step_check CHECK (((step)::text = ANY
((ARRAY['ADD_RESERVED_SEQUENCES'::character varying,
'ANALYZE_DESIGN_WARNINGS'::character varying, 'COMPLETE_ORDER'::character
varying, 'DEFINE_VARIANTS'::character varying,
'LABEL_TRANSLATION'::character varying])::text[])))

$ dropdb db && createdb db
$ pg_dump db --schema-only --no-owner > migration_base.sql
# migration_base.sql has the same CONSTRAINT as above
$ psql db -q -f migration_base.sql

# \d arrayed_library
CONSTRAINT arrayed_library_step_check CHECK (((step)::text = ANY
(ARRAY[('ADD_RESERVED_SEQUENCES'::character varying)::text,
('ANALYZE_DESIGN_WARNINGS'::character varying)::text,
('COMPLETE_ORDER'::character varying)::text, ('DEFINE_VARIANTS'::character
varying)::text, ('LABEL_TRANSLATION'::character varying)::text])))

Note that the restored constraint has ARRAY[('a'::character varying)::text,
('b'::character varying)::text, ...] while the original had
(ARRAY['a'::character varying, 'b'::character varying, ...])::text[].

Is there any way to have postgres NOT do the extra conversions?

-- 
- Josh


Re: [GENERAL] pg_restore fails

2016-03-13 Thread Francisco Olarte
Hi Karsten..

On Sun, Mar 13, 2016 at 12:09 AM, Karsten Hilbert
 wrote:
> I am trying to pg_restore from a directory dump.
> However, despite using
>
> --clean
> --create
> --if-exists
>
> I am getting an error because schema PUBLIC already exists.
snip, snip

Have you tried the classic combo: pg_restore -l > toc.dat, edit toc.dat with
your favorite editor, then pg_restore -L toc.dat?

I've had great success with that in the past, even splitting the TOC
into several chunks to be able to make adjustments between them, but
I've never used the directory format for serious backups (I've tried
all formats when learning).
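The combo above can be sketched as follows. The TOC excerpt here is fabricated for illustration (entry IDs and the archive name db.dump are assumptions); in real use the first commented command produces it from your archive:

```shell
# Real first step (needs an actual archive):
#   pg_restore -l db.dump > toc.list
# A TOC file looks roughly like this:
cat > toc.list <<'EOF'
;
; Archive created at 2016-03-13 00:00:00
;
6; 2615 2200 SCHEMA - public postgres
180; 1259 16386 TABLE public t1 postgres
EOF

# pg_restore -L ignores lines starting with ';', so commenting an entry
# out skips it -- here the CREATE SCHEMA public entry.
sed -i.bak '/SCHEMA - public/s/^/;/' toc.list

# Then restore only the remaining entries:
#   pg_restore -L toc.list -d targetdb db.dump
```

This is how the "schema public already exists" entry can be skipped without giving up on the rest of the restore.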

Francisco Olarte.




Re: [GENERAL] pg_restore fails

2016-03-12 Thread David G. Johnston
On Saturday, March 12, 2016, Karsten Hilbert 
wrote:

> On Sat, Mar 12, 2016 at 05:49:56PM -0700, David G. Johnston wrote:
>
> > I'd operate under the premise that all warnings and errors are fatal
> > (i.e., keep --exit-on-error) until you cannot for some very specific
> > reason.
>
> --exit-on-error will exit on _any_ perceived error,
> regardless of whether it could be ignored and the restore
> still succeed later on. Hence I cannot keep that option in
> use in order to implement the below.
>
> The unfortunate thing is that *any* restore will "fail"
> because the schema PUBLIC is copied from the template and
> that alone will produce an (ignorable) error...
>
>
So make things such that the error doesn't occur; the work-arounds are
reasonably simple.

Using either --clean or --create alone succeeded without the public schema
error; it is only when you use both that it fails. But both of those
individual options have prerequisites you need to ensure are met before
calling pg_restore.
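A minimal sketch of that work-around (database and dump names are placeholders, and the commands are shown rather than executed since they need a live server): drop any pre-existing target, then let pg_restore --create make the database itself, so the template's public schema is never restored over an existing one:

```shell
db=targetdb
dump=db.dump

# dropdb --if-exists (available since 9.2) avoids an error when the
# database is absent; --create connects to a maintenance database and
# issues CREATE DATABASE itself.
drop_cmd="dropdb --if-exists $db"
restore_cmd="pg_restore --create --exit-on-error -d postgres $dump"
printf '%s\n%s\n' "$drop_cmd" "$restore_cmd"
```

With the public-schema error gone, --exit-on-error can stay enabled, which makes the wrapper's exit-code check meaningful again.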

David J.


Re: [GENERAL] pg_restore fails

2016-03-12 Thread Karsten Hilbert
On Sat, Mar 12, 2016 at 05:49:56PM -0700, David G. Johnston wrote:

> I'd operate under the premise that all warnings and errors are fatal
> (i.e., keep --exit-on-error) until you cannot for some very specific
> reason.

--exit-on-error will exit on _any_ perceived error,
regardless of whether it could be ignored and the restore
still succeed later on. Hence I cannot keep that option in
use in order to implement the below.

The unfortunate thing is that *any* restore will "fail"
because the schema PUBLIC is copied from the template and
that alone will produce an (ignorable) error...

> I'd decide how to proceed at that point.  For instance pg_restore
> does provide an ignored error count at the end - you could scan the log for
> expected errors, count them, and compare to that value and fail if the
> count differs.

That is a good idea.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore fails

2016-03-12 Thread David G. Johnston
On Sat, Mar 12, 2016 at 5:43 PM, Karsten Hilbert 
wrote:

> On Sat, Mar 12, 2016 at 05:31:38PM -0700, David G. Johnston wrote:
>
> > > The reason being, of course, that I want to check the exit
> > > code in a pg_restore wrapper script.
> > >
> > >
> > I mistakenly thought public only came from template1...I wouldn't be
> > opposed to that change.  This all seems awfully familiar too...
> >
> > You probably should just drop the existing database and use --create by
> > itself.
> >
> > You can even use the dropdb command to avoid SQL in your script.
>
> I already do something similar: the wrapper fails if the
> target db exists before a restore is even attempted. The
> restore itself now uses --create and works as expected. The
> only thing left ATM is that I cannot distinguish
> success-with-or-without-ignored-errors from real failure.
>
> I _can_ partly work around that by attempting to connect to
> the target and checking the md5 sum of the schema definition
> against a known hash. That won't help with detecting whether
> pg_restore thought that _data_ was successfully restored ...
>
>
I'd operate under the premise that all warnings and errors are fatal
(i.e., keep --exit-on-error) until you cannot for some very specific
reason, and decide how to proceed at that point. For instance, pg_restore
provides an ignored-error count at the end - you could scan the log for
expected errors, count them, compare to that value, and fail if the
count differs. But this particular warning should be easy to work around,
and you shouldn't expect any others that would be considered non-critical.

David J.


Re: [GENERAL] pg_restore fails

2016-03-12 Thread Karsten Hilbert
On Sat, Mar 12, 2016 at 05:31:38PM -0700, David G. Johnston wrote:

> > The reason being, of course, that I want to check the exit
> > code in a pg_restore wrapper script.
> >
> >
> I mistakenly thought public only came from template1...I wouldn't be
> opposed to that change.  This all seems awfully familiar too...
> 
> You probably should just drop the existing database and use --create by
> itself.
> 
> You can even use the dropdb command to avoid SQL in your script.

I already do something similar: the wrapper fails if the
target db exists before a restore is even attempted. The
restore itself now uses --create and works as expected. The
only thing left ATM is that I cannot distinguish
success-with-or-without-ignored-errors from real failure.

I _can_ partly work around that by attempting to connect to
the target and checking the md5 sum of the schema definition
against a known hash. That won't help with detecting whether
pg_restore thought that _data_ was successfully restored ...

Karsten




Re: [GENERAL] pg_restore fails

2016-03-12 Thread David G. Johnston
On Sat, Mar 12, 2016 at 5:31 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> You probably should just drop the existing database and use --create by
> itself.
>
> You can even use the dropdb command to avoid SQL in your script.
>
>
This seems like it is the main problem:

# dropdb postgres
# pg_restore --create 

-> no problems

# pg_restore --clean --create 

-> public schema already exists

So both --clean and --create are attempting to create the database.

So in the example you can use either:

# pg_restore --clean -d postgres
or
# pg_restore --create -d template1

But with the latter you have to "dropdb" first - if the target database
already exists.

With the former you have to "createdb" first - if the target database
doesn't already exist.
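Sketched as a script, the two patterns might look like this (a dry-run: each command is only echoed, and "mydb" and "dump.dir" are placeholder names):

```shell
#!/bin/sh
# Dry-run sketch of the two restore recipes; change run() to
# execute "$@" for a real restore.
run() { echo "+ $*"; }

# Pattern 1 (the "former"): the target database must already exist,
# so create it first, then restore with --clean into it.
run createdb --template=template0 mydb
run pg_restore --clean --if-exists -d mydb dump.dir

# Pattern 2 (the "latter"): the target database must NOT exist;
# pg_restore creates it itself, connecting via template1.
run dropdb --if-exists mydb
run pg_restore --create -d template1 dump.dir
```

The run() wrapper only prints the commands, which makes the sequencing easy to inspect before pointing it at a real cluster.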

I agree that the exit code situation should be enhanced as well.

David J.


Re: [GENERAL] pg_restore fails

2016-03-12 Thread David G. Johnston
On Saturday, March 12, 2016, Karsten Hilbert 
wrote:

> On Sat, Mar 12, 2016 at 04:53:20PM -0700, David G. Johnston wrote:
>
> > The docs could probably use improvement here - though I am inferring
> > behavior from description and not code.
> >
> > The create option tells restore that it is pointless to use conditions or
> > actively drop objects since the newly created database is expected to be
> > empty.  The --clean option will cause pg_restore to drop the database if
> it
> > exists but only the database.  The --if-exists option would seem to be
> > extraneous.
> >
> > The clean option with create seems to be misleading since the advice
> later
> > in the document is to ensure the created database is empty by using
> > template0 - which you cannot specify directly within pg_restore and so
> > createdb or an equivalent command should be used to stage up the empty
> > database before performing a simple (no create or clean) restore.
> >
> > I'm not certain why the create database command constructed when
> specifying
> > --create isn't just defaulted to template0...and for completeness a
> > --template option added for user template specification
>
> The thing is, even when defaulting --create to template0 it
> would contain a copy of the PUBLIC schema from template0,
> which is then attempted to be restored from the dump, if
> included.
>
> As Adrian pointed out, that's not a problem as the restore
> continues anyway (which I was able to confirm).
>
> However, pg_restore.c seems to suggest
>
> 420  /* done, print a summary of ignored errors */
> 421  if (AH->n_errors)
> 422  fprintf(stderr, _("WARNING: errors ignored on restore: %d\n"),
> 423  AH->n_errors);
> 424
> 425  /* AH may be freed in CloseArchive? */
> 426  exit_code = AH->n_errors ? 1 : 0;
> 427
> 428  CloseArchive(AH);
>
> that the exit code is set to 1 if any errors ensued (but were
> ignored). Thusly the restore may have succeeded semantically
> but is still flagged as (technically) failed. That wouldn't
> be a problem if the condition
>
> really-fully-failed
>
> could be differentiated from
>
> technical-failure-but-ignored-and-semantically-succeeded
>
> at the exit code level since the latter outcome can be
> expected to happen under the circumstances described above.
>
> Am I thinking the wrong way ?
>
> The reason being, of course, that I want to check the exit
> code in a pg_restore wrapper script.
>
>
I mistakenly thought public only came from template1...I wouldn't be
opposed to that change.  This all seems awfully familiar too...

You probably should just drop the existing database and use --create by
itself.

You can even use the dropdb command to avoid SQL in your script.

David J,


Re: [GENERAL] pg_restore fails

2016-03-12 Thread Karsten Hilbert
On Sat, Mar 12, 2016 at 04:53:20PM -0700, David G. Johnston wrote:

> The docs could probably use improvement here - though I am inferring
> behavior from description and not code.
> 
> The create option tells restore that it is pointless to use conditions or
> actively drop objects since the newly created database is expected to be
> empty.  The --clean option will cause pg_restore to drop the database if it
> exists but only the database.  The --if-exists option would seem to be
> extraneous.
> 
> The clean option with create seems to be misleading since the advice later
> in the document is to ensure the created database is empty by using
> template0 - which you cannot specify directly within pg_restore and so
> createdb or an equivalent command should be used to stage up the empty
> database before performing a simple (no create or clean) restore.
> 
> I'm not certain why the create database command constructed when specifying
> --create isn't just defaulted to template0...and for completeness a
> --template option added for user template specification

The thing is, even when defaulting --create to template0 it
would contain a copy of the PUBLIC schema from template0,
which is then attempted to be restored from the dump, if
included.

As Adrian pointed out, that's not a problem as the restore
continues anyway (which I was able to confirm).

However, pg_restore.c seems to suggest

420  /* done, print a summary of ignored errors */
421  if (AH->n_errors)
422  fprintf(stderr, _("WARNING: errors ignored on restore: %d\n"),
423  AH->n_errors);
424
425  /* AH may be freed in CloseArchive? */
426  exit_code = AH->n_errors ? 1 : 0;
427
428  CloseArchive(AH);

that the exit code is set to 1 if any errors ensued (but were
ignored). Thus the restore may have succeeded semantically
but is still flagged as (technically) failed. That wouldn't
be a problem if the condition

really-fully-failed

could be differentiated from

technical-failure-but-ignored-and-semantically-succeeded

at the exit code level since the latter outcome can be
expected to happen under the circumstances described above.

Am I thinking the wrong way ?

The reason being, of course, that I want to check the exit
code in a pg_restore wrapper script.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore fails

2016-03-12 Thread David G. Johnston
On Sat, Mar 12, 2016 at 4:32 PM, Adrian Klaver 
wrote:

> On 03/12/2016 03:09 PM, Karsten Hilbert wrote:
>
>> Hi,
>>
>> Debian Stretch
>> PG 9.5.1
>>
>> I am trying to pg_restore from a directory dump.
>>
>> However, despite using
>>
>> --clean
>> --create
>> --if-exists
>>
>> I am getting an error because schema PUBLIC already exists.
>>
>> That schema is, indeed, included in the dump to be restored
>> and also cannot be omitted from either the dump or the
>> restore because it still contains a few relevant things which
>> I haven't yet moved to their own app specific schema.
>>
>> I am assuming (wrongly ?) that pg_restore uses template1 to
>> re-create the target database. I had to re-create template1
>> today from template0 (as is suggested) because I erroneously
>> added a few tables to template1 earlier. So, the newly
>> created target DB will, indeed, contain a schema PUBLIC
>> initially.
>>
>> That should not (?) matter however, because of the above
>> options which I would have expected to drop the schema before
>> (re)creating it (--clean).
>>
>> Here is the log:
>>
>> sudo -u postgres pg_restore --verbose --create --clean
>> --if-exists --exit-on-error --disable-triggers --dbname=template1 -p 5432
>> /tmp/gnumed/gm-restore_2016-03-12_23-58-05/backup-gnumed_v20-GNUmed_Team-hermes-2016-03-07-21-15-06.dir/
>> pg_restore: verbinde mit der Datenbank zur Wiederherstellung
>> pg_restore: entferne DATABASE gnumed_v20
>> pg_restore: erstelle DATABASE „gnumed_v20“
>> pg_restore: verbinde mit neuer Datenbank „gnumed_v20“
>> pg_restore: verbinde mit Datenbank „gnumed_v20“ als Benutzer
>> „postgres“
>> pg_restore: erstelle SCHEMA „au“
>> pg_restore: erstelle SCHEMA „audit“
>> pg_restore: erstelle SCHEMA „bill“
>> pg_restore: erstelle COMMENT „SCHEMA bill“
>> pg_restore: erstelle SCHEMA „blobs“
>> pg_restore: erstelle SCHEMA „cfg“
>> pg_restore: erstelle COMMENT „SCHEMA cfg“
>> pg_restore: erstelle SCHEMA „clin“
>> pg_restore: erstelle SCHEMA „de_de“
>> pg_restore: erstelle SCHEMA „dem“
>> pg_restore: erstelle SCHEMA „gm“
>> pg_restore: erstelle SCHEMA „i18n“
>> pg_restore: erstelle SCHEMA „pgtrgm“
>> pg_restore: erstelle SCHEMA „public“
>> pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:
>> pg_restore: [Archivierer (DB)] Fehler in
>> Inhaltsverzeichniseintrag 8; 2615 2200 SCHEMA public postgres
>> pg_restore: [Archivierer (DB)] could not execute query: FEHLER:
>> Schema „public“ existiert bereits
>> Die Anweisung war: CREATE SCHEMA public;
>>
>> I am sure I am doing something wrong, but what ?
>>
>
> Did it actually fail or did it just throw an error?
> In other words did the restore continue past the error?
>

My other post goes into more detail on why (and how) this should (could) be
improved.  On this point it doesn't "actually fail" insofar as no actual
harm is done: the schema cloned from template1 is empty, so the failed
attempt to create it - by definition in an empty state - is insubstantial
when the concern is whether the source and result databases have the same
schema.  But it is substantial insofar as it reports an error that doesn't
have to happen, and that counts as "a failure" if one chooses, quite
rightly, to --exit-on-error.

David J.


Re: [GENERAL] pg_restore fails

2016-03-12 Thread David G. Johnston
On Saturday, March 12, 2016, Karsten Hilbert 
wrote:

> Hi,
>
> Debian Stretch
> PG 9.5.1
>
> I am trying to pg_restore from a directory dump.
>
> However, despite using
>
> --clean
> --create
> --if-exists
>
> I am getting an error because schema PUBLIC already exists.
>
> That schema is, indeed, included in the dump to be restored
> and also cannot be omitted from either the dump or the
> restore because it still contains a few relevant things which
> I haven't yet moved to their own app specific schema.
>
> I am assuming (wrongly ?) that pg_restore uses template1 to
> re-create the target database. I had to re-create template1
> today from template0 (as is suggested) because I erroneously
> added a few tables to template1 earlier. So, the newly
> created target DB will, indeed, contain a schema PUBLIC
> initially.
>
> That should not (?) matter however, because of the above
> options which I would have expected to drop the schema before
> (re)creating it (--clean).
>
> Here is the log:
>
> sudo -u postgres pg_restore --verbose --create --clean --if-exists
> --exit-on-error --disable-triggers --dbname=template1 -p 5432
> /tmp/gnumed/gm-restore_2016-03-12_23-58-05/backup-gnumed_v20-GNUmed_Team-hermes-2016-03-07-21-15-06.dir/
> pg_restore: verbinde mit der Datenbank zur Wiederherstellung
> pg_restore: entferne DATABASE gnumed_v20
> pg_restore: erstelle DATABASE „gnumed_v20“
> pg_restore: verbinde mit neuer Datenbank „gnumed_v20“
> pg_restore: verbinde mit Datenbank „gnumed_v20“ als Benutzer
> „postgres“
> pg_restore: erstelle SCHEMA „au“
> pg_restore: erstelle SCHEMA „audit“
> pg_restore: erstelle SCHEMA „bill“
> pg_restore: erstelle COMMENT „SCHEMA bill“
> pg_restore: erstelle SCHEMA „blobs“
> pg_restore: erstelle SCHEMA „cfg“
> pg_restore: erstelle COMMENT „SCHEMA cfg“
> pg_restore: erstelle SCHEMA „clin“
> pg_restore: erstelle SCHEMA „de_de“
> pg_restore: erstelle SCHEMA „dem“
> pg_restore: erstelle SCHEMA „gm“
> pg_restore: erstelle SCHEMA „i18n“
> pg_restore: erstelle SCHEMA „pgtrgm“
> pg_restore: erstelle SCHEMA „public“
> pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:
> pg_restore: [Archivierer (DB)] Fehler in Inhaltsverzeichniseintrag
> 8; 2615 2200 SCHEMA public postgres
> pg_restore: [Archivierer (DB)] could not execute query: FEHLER:
> Schema „public“ existiert bereits
> Die Anweisung war: CREATE SCHEMA public;
>
> I am sure I am doing something wrong, but what ?
>
>
>
The docs could probably use improvement here - though I am inferring
behavior from description and not code.

The create option tells restore that it is pointless to use conditions or
actively drop objects since the newly created database is expected to be
empty.  The --clean option will cause pg_restore to drop the database if it
exists but only the database.  The --if-exists option would seem to be
extraneous.

The clean option with create seems to be misleading since the advice later
in the document is to ensure the created database is empty by using
template0 - which you cannot specify directly within pg_restore and so
createdb or an equivalent command should be used to stage up the empty
database before performing a simple (no create or clean) restore.

I'm not certain why the create database command constructed when specifying
--create isn't just defaulted to template0...and for completeness a
--template option added for user template specification

David J.


Re: [GENERAL] pg_restore fails

2016-03-12 Thread Karsten Hilbert
On Sat, Mar 12, 2016 at 03:32:15PM -0800, Adrian Klaver wrote:

> > pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:
> > pg_restore: [Archivierer (DB)] Fehler in Inhaltsverzeichniseintrag 8; 
> > 2615 2200 SCHEMA public postgres
> > pg_restore: [Archivierer (DB)] could not execute query: FEHLER:  Schema 
> > „public“ existiert bereits
> > Die Anweisung war: CREATE SCHEMA public;
> >
> >I am sure I am doing something wrong, but what ?
> 
> Did it actually fail or did it just throw an error?
> In other words did the restore continue past the error?

Good question. I'll remove the --exit-on-error and retry :-)

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore fails

2016-03-12 Thread Adrian Klaver

On 03/12/2016 03:09 PM, Karsten Hilbert wrote:

Hi,

Debian Stretch
PG 9.5.1

I am trying to pg_restore from a directory dump.

However, despite using

--clean
--create
--if-exists

I am getting an error because schema PUBLIC already exists.

That schema is, indeed, included in the dump to be restored
and also cannot be omitted from either the dump or the
restore because it still contains a few relevant things which
I haven't yet moved to their own app specific schema.

I am assuming (wrongly ?) that pg_restore uses template1 to
re-create the target database. I had to re-create template1
today from template0 (as is suggested) because I erroneously
added a few tables to template1 earlier. So, the newly
created target DB will, indeed, contain a schema PUBLIC
initially.

That should not (?) matter however, because of the above
options which I would have expected to drop the schema before
(re)creating it (--clean).

Here is the log:

sudo -u postgres pg_restore --verbose --create --clean --if-exists 
--exit-on-error --disable-triggers --dbname=template1 -p 5432 
/tmp/gnumed/gm-restore_2016-03-12_23-58-05/backup-gnumed_v20-GNUmed_Team-hermes-2016-03-07-21-15-06.dir/
pg_restore: verbinde mit der Datenbank zur Wiederherstellung
pg_restore: entferne DATABASE gnumed_v20
pg_restore: erstelle DATABASE „gnumed_v20“
pg_restore: verbinde mit neuer Datenbank „gnumed_v20“
pg_restore: verbinde mit Datenbank „gnumed_v20“ als Benutzer „postgres“
pg_restore: erstelle SCHEMA „au“
pg_restore: erstelle SCHEMA „audit“
pg_restore: erstelle SCHEMA „bill“
pg_restore: erstelle COMMENT „SCHEMA bill“
pg_restore: erstelle SCHEMA „blobs“
pg_restore: erstelle SCHEMA „cfg“
pg_restore: erstelle COMMENT „SCHEMA cfg“
pg_restore: erstelle SCHEMA „clin“
pg_restore: erstelle SCHEMA „de_de“
pg_restore: erstelle SCHEMA „dem“
pg_restore: erstelle SCHEMA „gm“
pg_restore: erstelle SCHEMA „i18n“
pg_restore: erstelle SCHEMA „pgtrgm“
pg_restore: erstelle SCHEMA „public“
pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:
pg_restore: [Archivierer (DB)] Fehler in Inhaltsverzeichniseintrag 8; 
2615 2200 SCHEMA public postgres
pg_restore: [Archivierer (DB)] could not execute query: FEHLER:  Schema 
„public“ existiert bereits
Die Anweisung war: CREATE SCHEMA public;

I am sure I am doing something wrong, but what ?


Did it actually fail or did it just throw an error?
In other words did the restore continue past the error?





Thanks,
Karsten




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_restore fails

2016-03-12 Thread Karsten Hilbert
On Sun, Mar 13, 2016 at 12:09:19AM +0100, Karsten Hilbert wrote:

In case it is needed:

>   pg_restore: erstelle SCHEMA „public“

creating SCHEMA "public"

>   pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:

Error in Phase ...

>   pg_restore: [Archivierer (DB)] Fehler in Inhaltsverzeichniseintrag 8; 
> 2615 2200 SCHEMA public postgres

Error in TOC entry 8 ...

>   pg_restore: [Archivierer (DB)] could not execute query: FEHLER:  Schema 
> „public“ existiert bereits

ERROR: Schema "public" already exists

>   Die Anweisung war: CREATE SCHEMA public;

The command was: CREATE ...

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] pg_restore fails

2016-03-12 Thread Karsten Hilbert
Hi,

Debian Stretch
PG 9.5.1

I am trying to pg_restore from a directory dump.

However, despite using

--clean
--create
--if-exists

I am getting an error because schema PUBLIC already exists.

That schema is, indeed, included in the dump to be restored
and also cannot be omitted from either the dump or the
restore because it still contains a few relevant things which
I haven't yet moved to their own app specific schema.

I am assuming (wrongly ?) that pg_restore uses template1 to
re-create the target database. I had to re-create template1
today from template0 (as is suggested) because I erroneously
added a few tables to template1 earlier. So, the newly
created target DB will, indeed, contain a schema PUBLIC
initially.

That should not (?) matter however, because of the above
options which I would have expected to drop the schema before
(re)creating it (--clean).

Here is the log:

sudo -u postgres pg_restore --verbose --create --clean --if-exists 
--exit-on-error --disable-triggers --dbname=template1 -p 5432 
/tmp/gnumed/gm-restore_2016-03-12_23-58-05/backup-gnumed_v20-GNUmed_Team-hermes-2016-03-07-21-15-06.dir/
pg_restore: verbinde mit der Datenbank zur Wiederherstellung
pg_restore: entferne DATABASE gnumed_v20
pg_restore: erstelle DATABASE „gnumed_v20“
pg_restore: verbinde mit neuer Datenbank „gnumed_v20“
pg_restore: verbinde mit Datenbank „gnumed_v20“ als Benutzer „postgres“
pg_restore: erstelle SCHEMA „au“
pg_restore: erstelle SCHEMA „audit“
pg_restore: erstelle SCHEMA „bill“
pg_restore: erstelle COMMENT „SCHEMA bill“
pg_restore: erstelle SCHEMA „blobs“
pg_restore: erstelle SCHEMA „cfg“
pg_restore: erstelle COMMENT „SCHEMA cfg“
pg_restore: erstelle SCHEMA „clin“
pg_restore: erstelle SCHEMA „de_de“
pg_restore: erstelle SCHEMA „dem“
pg_restore: erstelle SCHEMA „gm“
pg_restore: erstelle SCHEMA „i18n“
pg_restore: erstelle SCHEMA „pgtrgm“
pg_restore: erstelle SCHEMA „public“
pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:
pg_restore: [Archivierer (DB)] Fehler in Inhaltsverzeichniseintrag 8; 
2615 2200 SCHEMA public postgres
pg_restore: [Archivierer (DB)] could not execute query: FEHLER:  Schema 
„public“ existiert bereits
Die Anweisung war: CREATE SCHEMA public;

I am sure I am doing something wrong, but what ?

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore without dropping db/table

2016-03-10 Thread Karsten Hilbert
On Thu, Mar 10, 2016 at 03:59:58PM -0500, Melvin Davidson wrote:

> fyi, since the version of PostgreSQL was NOT stated (or O/S) as is the
> proper thing to do when posting, I gave a generic solution which covers all
> versions and O/S's

That's an important point.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore without dropping db/table

2016-03-10 Thread Melvin Davidson
FYI: since the version of PostgreSQL (or O/S) was not stated, as is the
proper thing to do when posting, I gave a generic solution that covers all
versions and O/S's.

On Thu, Mar 10, 2016 at 3:53 PM, Karsten Hilbert 
wrote:

> On Thu, Mar 10, 2016 at 01:49:42PM -0500, Melvin Davidson wrote:
>
> > The best way to accomplish what you want is to create a table with the
> same
> > structure in the first database as the one you want to restore to. Then
> you
> > can truncate that table, restore the data from the other db into it, and
> > use that to add the new rows to your table.
> > eg:
> > 1. You have your original table:
> >CREATE TABLE orig_table
> >(prime_key varchar(10) ,
> > data_col1 integer,
> > data_col2 varchar(5),
> > CONSTRAINT orig_table_pk PRIMARY KEY (prime_key)
> >);
> >  2. Duplicate table:
> >CREATE TABLE dup_table
> >(prime_key varchar(10) ,
> > data_col1 integer,
> > data_col2 varchar(5),
> > CONSTRAINT dup_table_pk PRIMARY KEY (prime_key)
> >);
>
> This could benefit from
>
> create table [...] like orig_table excluding all ...
>
> >  8. INSERT INTO orig_table
> > SELECT * FROM dup_table
> > WHERE dup.prime_key NOT IN (SELECT prime_key FROM orig_table);
>
> This will work if
>
> dup.prime_key NOT IN (SELECT prime_key FROM orig_table)
>
> identifies "new" rows. This probably has the highest chance
> of being true if prime_key is a natural key.
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_restore without dropping db/table

2016-03-10 Thread Karsten Hilbert
On Thu, Mar 10, 2016 at 01:49:42PM -0500, Melvin Davidson wrote:

> The best way to accomplish what you want is to create a table with the same
> structure in the first database as the one you want to restore to. Then you
> can truncate that table, restore the data from the other db into it, and
> use that to add the new rows to your table.
> eg:
> 1. You have your original table:
>CREATE TABLE orig_table
>(prime_key varchar(10) ,
> data_col1 integer,
> data_col2 varchar(5),
> CONSTRAINT orig_table_pk PRIMARY KEY (prime_key)
>);
>  2. Duplicate table:
>CREATE TABLE dup_table
>(prime_key varchar(10) ,
> data_col1 integer,
> data_col2 varchar(5),
> CONSTRAINT dup_table_pk PRIMARY KEY (prime_key)
>);

This could benefit from

create table [...] like orig_table excluding all ...

>  8. INSERT INTO orig_table
> SELECT * FROM dup_table
> WHERE dup.prime_key NOT IN (SELECT prime_key FROM orig_table);

This will work if

dup.prime_key NOT IN (SELECT prime_key FROM orig_table)

identifies "new" rows. This probably has the highest chance
of being true if prime_key is a natural key.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_restore without dropping db/table

2016-03-10 Thread Melvin Davidson
On Thu, Mar 10, 2016 at 12:53 PM, Adrian Klaver 
wrote:

> On 03/10/2016 09:41 AM, Karsten Hilbert wrote:
>
>> On Thu, Mar 10, 2016 at 10:51:05AM -0500, anj patnaik wrote:
>>
>> Does pg_restore only add new rows if I restore without deleting old db?
>>>
>>
>> No. For one thing, pg_restore cannot know what you consider
>> to be a "new row".
>>
>
> If you however do know what is new is then you might want to look at:
>
> COPY
> http://www.postgresql.org/docs/9.5/interactive/sql-copy.html
> "COPY with a file name instructs the PostgreSQL server to directly read
> from or write to a file. The file must be accessible by the PostgreSQL user
> (the user ID the server runs as) and the name must be specified from the
> viewpoint of the server. When PROGRAM is specified, the server executes the
> given command and reads from the standard output of the program, or writes
> to the standard input of the program. The command must be specified from
> the viewpoint of the server, and be executable by the PostgreSQL user. When
> STDIN or STDOUT is specified, data is transmitted via the connection
> between the client and the server."
>
> Or you can use the psql variant \copy
>
> http://www.postgresql.org/docs/9.5/interactive/app-psql.html
>
> "Performs a frontend (client) copy. This is an operation that runs an SQL
> COPY command, but instead of the server reading or writing the specified
> file, psql reads or writes the file and routes the data between the server
> and the local file system. This means that file accessibility and
> privileges are those of the local user, not the server, and no SQL
> superuser privileges are required."
>
>
> In either case you can craft a query to limit the rows copied. The issue
> is that this assumes the both databases are static over the time you do
> this.
>
> Your other options are to use replication:
>
> http://www.postgresql.org/docs/9.5/interactive/high-availability.html
>
> or FDW
>
> https://wiki.postgresql.org/wiki/Foreign_data_wrapper
>
>
>> Best,
>> Karsten
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


The best way to accomplish what you want is to create a table with the same
structure in the first database as the one you want to restore to. Then you
can truncate that table, restore the data from the other db into it, and
use that to add the new rows to your table.
eg:
1. You have your original table:
   CREATE TABLE orig_table
   (prime_key varchar(10) ,
data_col1 integer,
data_col2 varchar(5),
CONSTRAINT orig_table_pk PRIMARY KEY (prime_key)
   );
 2. Duplicate table:
   CREATE TABLE dup_table
   (prime_key varchar(10) ,
data_col1 integer,
data_col2 varchar(5),
CONSTRAINT dup_table_pk PRIMARY KEY (prime_key)
   );
 3. pg_dump --format=custom -a -t orig_table second_db > your_dump.dmp
 4. psql -d first_db -c "TRUNCATE dup_table;"
 5. edit your_dump.dmp and change all occurrences of orig_table to
dup_table.
 6. pg_restore -a -d first_db -t dup_table your_dump.dmp
 7. psql -d first_db
 8. INSERT INTO orig_table
SELECT * FROM dup_table dup
WHERE dup.prime_key NOT IN (SELECT prime_key FROM orig_table);
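The steps above can be sketched as one dry-run script (placeholder database and table names; each command is echoed rather than executed, and the manual dump-editing step is only noted):

```shell
#!/bin/sh
# Dry-run of steps 3-8 above; change run() to execute "$@" for real.
run() { echo "+ $*"; }

run pg_dump --format=custom -a -t orig_table second_db    # step 3; redirect to your_dump.dmp
run psql -d first_db -c 'TRUNCATE dup_table;'             # step 4
# step 5 (renaming orig_table to dup_table inside the dump) is manual
run pg_restore -a -d first_db -t dup_table your_dump.dmp  # step 6
run psql -d first_db -c "INSERT INTO orig_table SELECT * FROM dup_table dup WHERE dup.prime_key NOT IN (SELECT prime_key FROM orig_table);"  # steps 7-8
```

Note the NOT IN anti-join assumes prime_key is never NULL in dup_table; with NULLs present, a NOT EXISTS formulation would be safer.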

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_restore without dropping db/table

2016-03-10 Thread Adrian Klaver

On 03/10/2016 09:41 AM, Karsten Hilbert wrote:

On Thu, Mar 10, 2016 at 10:51:05AM -0500, anj patnaik wrote:


Does pg_restore only add new rows if I restore without deleting old db?


No. For one thing, pg_restore cannot know what you consider
to be a "new row".


If, however, you do know what is new, then you might want to look at:

COPY
http://www.postgresql.org/docs/9.5/interactive/sql-copy.html
"COPY with a file name instructs the PostgreSQL server to directly read 
from or write to a file. The file must be accessible by the PostgreSQL 
user (the user ID the server runs as) and the name must be specified 
from the viewpoint of the server. When PROGRAM is specified, the server 
executes the given command and reads from the standard output of the 
program, or writes to the standard input of the program. The command 
must be specified from the viewpoint of the server, and be executable by 
the PostgreSQL user. When STDIN or STDOUT is specified, data is 
transmitted via the connection between the client and the server."


Or you can use the psql variant \copy

http://www.postgresql.org/docs/9.5/interactive/app-psql.html

"Performs a frontend (client) copy. This is an operation that runs an 
SQL COPY command, but instead of the server reading or writing the 
specified file, psql reads or writes the file and routes the data 
between the server and the local file system. This means that file 
accessibility and privileges are those of the local user, not the 
server, and no SQL superuser privileges are required."



In either case you can craft a query to limit the rows copied. The issue 
is that this assumes both databases are static over the time you do 
this.
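For illustration, a row-limited client-side copy along those lines might look like this (a dry-run sketch: commands are only echoed, and the database names, table, predicate, and file name are placeholders):

```shell
#!/bin/sh
# Dry-run sketch of psql's \copy with a query restricting which rows move.
run() { echo "+ $*"; }

# Export only the rows considered "new" from the source database:
run psql -d second_db -c "\\copy (SELECT * FROM orig_table WHERE prime_key > 'K1000') TO 'new_rows.csv' WITH CSV"
# Load them into the target database:
run psql -d first_db -c "\\copy orig_table FROM 'new_rows.csv' WITH CSV"
```

Because \copy runs client-side, new_rows.csv lives on the machine running psql, so this works even when the two databases are on different servers.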


Your other options are to use replication:

http://www.postgresql.org/docs/9.5/interactive/high-availability.html

or FDW

https://wiki.postgresql.org/wiki/Foreign_data_wrapper



Best,
Karsten




--
Adrian Klaver
adrian.kla...@aklaver.com



