Re: [GENERAL] Incremental / Level -1 backup in PG

2017-03-21 Thread Stephen Frost
John,

* John R Pierce (pie...@hogranch.com) wrote:
> On 3/21/2017 5:27 PM, Rakesh Kumar wrote:
> >PG does not have a concept of incremental backup. The way it works in
> >Oracle and other RDBMSs is that an incremental backup backs up only the
> >blocks changed since the last full backup.  So if only 10% of blocks
> >changed since the last full backup, the incremental backup covers only
> >that 10%.
> >I am wondering whether it is technically feasible to implement it like this:
> >
> >1 - At the time of the full backup, note the last-modified time of each
> >data file in a repository.
> >2 - The next time the incremental backup runs, for every data file it
> >checks the file's last-modified time against the one in the repository to
> >determine whether it has changed since the last full backup. If yes, back
> >it up.
> >
> >Now on to restore:
> >
> >1 - First restore full backup.
> >2 - Restore incremental backup.
> >
> >My question: Will it work in PG?
> 
> basebackup + WAL archive lets you do exactly this.  You can
> restore to any transaction between when that base backup was taken
> and the latest entry in the WAL archive; it's referred to in the
> documentation as PITR, Point-in-Time Recovery.

WAL must always be kept for file-level backups, of course, but it does
not allow the kind of incremental backup the OP was suggesting.

It's important to understand that you might start reading a file whose
timestamp is X, read half of it, and then PG starts writing to the first
half of the file, and you finish reading the file, all within the same
second.

A later incremental backup might assume that file hadn't been changed
from the version you have and therefore not back it up.  The WAL for the
change written by PG would be covered by the first 'full' backup, but
would not be included in the WAL generated during the incremental
backup, leading to a situation where that write is lost and you have a
corrupted backup.

Do not try to implement an incremental backup solution using
simple/naive tools like rsync with timestamp-based incrementals.  It is
not safe.
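The failure mode described above can be demonstrated with a small self-contained simulation (illustrative only: the mtimes are set explicitly to mimic a filesystem with one-second timestamp granularity, with no PostgreSQL involved):

```python
import os
import tempfile

# Simulate a data file that is backed up and then rewritten within the same
# second.  On a filesystem with one-second mtime granularity the two states
# are indistinguishable by timestamp, so a naive mtime-based incremental
# backup would skip the file and silently lose the second write.
d = tempfile.mkdtemp()
path = os.path.join(d, "relation_file")

with open(path, "wb") as f:
    f.write(b"block-v1")
os.utime(path, (1000, 1000))          # state the full backup reads, at t=1000s

mtime_at_backup = os.stat(path).st_mtime

with open(path, "wb") as f:           # PG rewrites the file at t=1000.4s...
    f.write(b"block-v2")
os.utime(path, (1000, 1000))          # ...which truncates to the same second

looks_changed = os.stat(path).st_mtime != mtime_at_backup
print(looks_changed)                  # False: the rewrite is invisible to mtime
```

This is why the safe tools fall back on checksums or deliberately conservative timestamp rules rather than a plain mtime comparison.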

Thanks!

Stephen




Re: [GENERAL] Incremental / Level -1 backup in PG

2017-03-21 Thread Stephen Frost
Greetings,

* Rakesh Kumar (rakeshkumar...@outlook.com) wrote:
> PG does not have a concept of incremental backup. The way it works in Oracle
> and other RDBMSs is that an incremental backup backs up only the blocks
> changed since the last full backup.  So if only 10% of blocks changed since
> the last full backup, the incremental backup covers only that 10%.
> I am wondering whether it is technically feasible to implement it like this:
> 
> 1 - At the time of the full backup, note the last-modified time of each data
> file in a repository.
> 2 - The next time the incremental backup runs, for every data file it checks
> the file's last-modified time against the one in the repository to determine
> whether it has changed since the last full backup. If yes, back it up.
> 
> Now on to restore:
> 
> 1 - First restore full backup.
> 2 - Restore incremental backup.  
> 
> My question: Will it work in PG?

The short answer is 'no'.  There are complications around this,
particularly at the edges and because files can be written and rewritten
as you're reading them.  Basically, no file with a timestamp after the
checkpoint before the backup can be omitted from an incremental backup.

I strongly recommend you use one of the existing backup solutions for
PostgreSQL which know how to properly perform incremental backups.  I
know at least pgBackRest and barman do; I'm not sure about others.
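For reference, the pgBackRest flow is roughly the following (a sketch with placeholder paths and stanza name; option names vary slightly between pgBackRest versions, so check the docs for yours):

```
# /etc/pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest

[main]
pg1-path=/var/lib/postgresql/9.6/main

# Then, from the shell:
#   pgbackrest --stanza=main stanza-create
#   pgbackrest --stanza=main --type=full backup
#   pgbackrest --stanza=main --type=incr backup    # incremental against the last backup
#   pgbackrest --stanza=main --delta restore
```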

Thanks!

Stephen




Re: [GENERAL] Incremental / Level -1 backup in PG

2017-03-21 Thread John R Pierce

On 3/21/2017 5:27 PM, Rakesh Kumar wrote:

PG does not have a concept of incremental backup. The way it works in Oracle
and other RDBMSs is that an incremental backup backs up only the blocks changed
since the last full backup.  So if only 10% of blocks changed since the last
full backup, the incremental backup covers only that 10%.
I am wondering whether it is technically feasible to implement it like this:

1 - At the time of the full backup, note the last-modified time of each data
file in a repository.
2 - The next time the incremental backup runs, for every data file it checks
the file's last-modified time against the one in the repository to determine
whether it has changed since the last full backup. If yes, back it up.

Now on to restore:

1 - First restore full backup.
2 - Restore incremental backup.

My question: Will it work in PG?


basebackup + WAL archive lets you do exactly this.  You can
restore to any transaction between when that base backup was taken
and the latest entry in the WAL archive; it's referred to in the
documentation as PITR, Point-in-Time Recovery.
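As a sketch of what that setup looks like on 9.6 (paths and the target time are placeholders):

```
# postgresql.conf -- enable WAL archiving before taking the base backup
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

# take the base backup:
#   pg_basebackup -D /mnt/base_backup -Ft -z

# recovery.conf, placed in the restored data directory, replays archived
# WAL up to the chosen point in time:
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2017-03-21 18:00:00'
```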



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Incremental / Level -1 backup in PG

2017-03-21 Thread Adrian Klaver

On 03/21/2017 05:27 PM, Rakesh Kumar wrote:

PG does not have a concept of incremental backup. The way it works in Oracle
and other RDBMSs is that an incremental backup backs up only the blocks changed
since the last full backup.  So if only 10% of blocks changed since the last
full backup, the incremental backup covers only that 10%.
I am wondering whether it is technically feasible to implement it like this:

1 - At the time of the full backup, note the last-modified time of each data
file in a repository.
2 - The next time the incremental backup runs, for every data file it checks
the file's last-modified time against the one in the repository to determine
whether it has changed since the last full backup. If yes, back it up.

Now on to restore:

1 - First restore full backup.
2 - Restore incremental backup.

My question: Will it work in PG?


?:
https://www.postgresql.org/docs/9.6/static/continuous-archiving.html

https://www.postgresql.org/docs/9.6/static/warm-standby.html#STREAMING-REPLICATION



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Incremental / Level -1 backup in PG

2017-03-21 Thread Rakesh Kumar
PG does not have a concept of incremental backup. The way it works in Oracle
and other RDBMSs is that an incremental backup backs up only the blocks changed
since the last full backup.  So if only 10% of blocks changed since the last
full backup, the incremental backup covers only that 10%.
I am wondering whether it is technically feasible to implement it like this:

1 - At the time of the full backup, note the last-modified time of each data
file in a repository.
2 - The next time the incremental backup runs, for every data file it checks
the file's last-modified time against the one in the repository to determine
whether it has changed since the last full backup. If yes, back it up.

Now on to restore:

1 - First restore full backup.
2 - Restore incremental backup.  

My question: Will it work in PG?




Re: [GENERAL] mysql_config_editor feature suggestion

2017-03-21 Thread Adrian Klaver

On 03/21/2017 03:03 PM, Tom Ekberg wrote:

I have been working with MySQL a bit (yes, I know, heresy) and
encountered a program called mysql_config_editor. In my opinion it does
a better job of local password management than using a ~/.pgpass file.
Instead of assuming that a mode of 600 will keep people from peeking at
your password, it encrypts the password, but keeps the other parameters
like host, port and user available for viewing as plaintext. You can
read more about it here:

  https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html

The host, user, password values are grouped into what are called login
paths which are of the form:

  [some_login_path]
  host = localhost
  user = localuser

Just like the config files you have no doubt seen before. The only way
to set a password is to use the command:

  mysql_config_editor set --login-path=some_login_path --password

which will prompt the user to enter the password for the specified login
path. The password is never seen as plain text. There are other commands
to set, remove, print and reset values for a login path. The print
command that shows a password will display this instead:

  password = *

Adding a similar feature for PostgreSQL will also require a change to
the psql program to specify and handle --login-path used for
authentication. This may also be the case for some of the other pg_*
utilities.


Something like this?:

https://www.postgresql.org/docs/9.6/static/libpq-pgservice.html

with:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ba005f193d88a8404e81db3df223cf689d64d75e

https://www.postgresql.org/docs/devel/static/libpq-connect.html#libpq-connect-passfile

The only thing lacking is that the passwords are not actually encrypted 
in the file. There are, though, other authentication methods available 
besides md5 password authentication:


https://www.postgresql.org/docs/devel/static/auth-methods.html



I think adding a feature like mysql_config_editor to PostgreSQL is an
easy way to set up multiple "personalities" for connecting to different
PostgreSQL servers. The password protection will deter the curious user
from gaining access to your data. It will not stop a determined hacker,
but the idea is to make it more difficult.

Other than this mailing list, is there a way to make a feature request
for PostgreSQL?

Tom Ekberg
Senior Computer Specialist, Lab Medicine
University of Washington Medical Center
1959 NE Pacific St, MS 357110
Seattle WA 98195
work: (206) 598-8544
email: tekb...@uw.edu







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] The same query is too slow in some time of execution

2017-03-21 Thread Adrian Klaver

On 03/21/2017 04:27 PM, DrakoRod wrote:

Hi folks!

I have a query with a join of two tables. One table has 5 million rows and
the child table has 17 million rows. The query is executed many times by the
application, approximately every 20 seconds.

The query normally executes in 2-3 seconds, but sometimes, with no apparent
pattern, it takes 4-6 minutes, far slower than its normal performance.

I've configured 2 things:

1. Each table has indexes. The first table has 11 indexes and the second has 7.

2. I configured VACUUM and ANALYZE to run after 20,000 rows are inserted.

But apparently the problem continues.


Well without seeing the query, the EXPLAIN ANALYZE of the query, the 
table schema(including indexes), the Postgres version, Postgres log info 
from the 'problem' times, system resource values and so on, there is 
really no way to answer this.


For a longer version of the above see:

https://wiki.postgresql.org/wiki/Guide_to_reporting_problems



Best Regards!
DRakoROd



-
Dame un poco de fe, eso me bastará.
Rozvo Ware Solutions
--
View this message in context: 
http://www.postgresql-archive.org/The-same-query-is-too-slow-in-some-time-of-execution-tp5951060.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] The same query is too slow in some time of execution

2017-03-21 Thread DrakoRod
Hi folks! 

I have a query with a join of two tables. One table has 5 million rows and
the child table has 17 million rows. The query is executed many times by the
application, approximately every 20 seconds.

The query normally executes in 2-3 seconds, but sometimes, with no apparent
pattern, it takes 4-6 minutes, far slower than its normal performance.

I've configured 2 things:

1. Each table has indexes. The first table has 11 indexes and the second has 7.

2. I configured VACUUM and ANALYZE to run after 20,000 rows are inserted.

But apparently the problem continues.

Best Regards!
DRakoROd



-
Dame un poco de fe, eso me bastará.
Rozvo Ware Solutions 
--
View this message in context: 
http://www.postgresql-archive.org/The-same-query-is-too-slow-in-some-time-of-execution-tp5951060.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] mysql_config_editor feature suggestion

2017-03-21 Thread Steve Atkins

> On Mar 21, 2017, at 3:03 PM, Tom Ekberg  wrote:
> 
> I have been working with MySQL a bit (yes, I know, heresy) and encountered a 
> program called mysql_config_editor. In my opinion it does a better job of 
> local password management than using a ~/.pgpass file. Instead of assuming 
> that a mode of 600 will keep people from peeking at your password, it 
> encrypts the password, but keeps the other parameters like host, port and 
> user available for viewing as plaintext. You can read more about it here:
> 
>  https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html
> 
> The host, user, password values are grouped into what are called login paths 
> which are of the form:
> 
>  [some_login_path]
>  host = localhost
>  user = localuser

Looks rather like a postgresql service file. :)
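For comparison, a pg_service.conf entry mirroring that login path might look like this (values are placeholders; see the libpq docs for the full option list):

```
# ~/.pg_service.conf
[some_login_path]
host=localhost
port=5432
user=localuser
dbname=somedb

# used as:  psql service=some_login_path
# (the password itself still comes from ~/.pgpass or a prompt)
```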

> 
> Just like the config files you have no doubt seen before. The only way to set 
> a password is to use the command:
> 
>  mysql_config_editor set --login-path=some_login_path --password
> 
> which will prompt the user to enter the password for the specified login 
> path. The password is never seen as plain text. There are other commands to 
> set, remove, print and reset values for a login path. The print command that 
> shows a password will display this instead:
> 
>  password = *

This seems like it'd give people a false sense of security. If someone can read 
that file, they can log in to that account. Obfuscating the password just makes 
naive users think they're secure when they're anything but, and means they're 
less likely to be careful about making that file unreadable and avoiding 
checking it into revision control and so on. It'd protect against 
shoulder-surfing, but it's not like you're going to have ~/.pgpass open in an 
editor too often.

A commandline tool for managing pgpass might be interesting, I guess. Though 
for local databases using peer authentication is likely better than saving 
passwords in a file.
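That peer-authentication setup is a single pg_hba.conf line, for reference:

```
# pg_hba.conf: Unix-socket connections authenticated by the OS username,
# so no password is stored anywhere
local   all   all   peer
```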

> Adding a similar feature for PostgreSQL will also require a change to the 
> psql program to specify and handle --login-path used for authentication. This 
> may also be the case for some of the other pg_* utilities.
> 
> I think adding a feature like mysql_config_editor to PostgreSQL is an easy 
> way to set up multiple "personalities" for connecting to different PostgreSQL 
> servers. The password protection will deter the curious user from gaining 
> access to your data. It will not stop a determined hacker, but the idea is to 
> make it more difficult.
> 
> Other than this mailing list, is there a way to make a feature request for 
> PostgreSQL?

Cheers,
  Steve



Re: [GENERAL] mysql_config_editor feature suggestion

2017-03-21 Thread Guillaume Lelarge
2017-03-21 23:03 GMT+01:00 Tom Ekberg :

> I have been working with MySQL a bit (yes, I know, heresy) and encountered
> a program called mysql_config_editor. In my opinion it does a better job of
> local password management than using a ~/.pgpass file. Instead of assuming
> that a mode of 600 will keep people from peeking at your password, it
> encrypts the password, but keeps the other parameters like host, port and
> user available for viewing as plaintext. You can read more about it here:
>
>   https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html
>
> The host, user, password values are grouped into what are called login
> paths which are of the form:
>
>   [some_login_path]
>   host = localhost
>   user = localuser
>
> Just like the config files you have no doubt seen before. The only way to
> set a password is to use the command:
>
>   mysql_config_editor set --login-path=some_login_path --password
>
> which will prompt the user to enter the password for the specified login
> path. The password is never seen as plain text. There are other commands to
> set, remove, print and reset values for a login path. The print command
> that shows a password will display this instead:
>
>   password = *
>
> Adding a similar feature for PostgreSQL will also require a change to the
> psql program to specify and handle --login-path used for authentication.
> This may also be the case for some of the other pg_* utilities.
>
> I think adding a feature like mysql_config_editor to PostgreSQL is an easy
> way to set up multiple "personalities" for connecting to different
> PostgreSQL servers. The password protection will deter the curious user
> from gaining access to your data. It will not stop a determined hacker, but
> the idea is to make it more difficult.
>
>
I'm wondering how it works. It stores the password encrypted in the
.mylogin.cnf file, and then the other tools can use the encrypted password
in this file to connect to the server without having to type a password?
In that case, if I have access to this file, what prevents me from copying
it to another computer and connecting without your authorization and without
your password? (which is exactly what you're afraid of with the .pgpass file)


> Other than this mailing list, is there a way to make a feature request for
> PostgreSQL?
>
>
This one is pretty good for this.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


[GENERAL] Why is this functional index not used?

2017-03-21 Thread ibeq GmbH
Given a country table and an order table:

CREATE TABLE g.country
(
-- inherited from table g.standard:  oid uuid NOT NULL,
-- ... some more columns inherited...
  lisocode integer NOT NULL, -- Numeric ISO 3166 code
  szisoalpha2 character varying(2), -- The 2 letter country code
  szisoalpha3 character varying(3), -- The 3 letter country code
  szisonum3 character varying(3), -- The NUMERIC country code with leading zeros
-- a few more columns here
CONSTRAINT country_pkey PRIMARY KEY (oid)
) INHERITS (g.standard)
WITH (
  OIDS=FALSE
);

This table is filled with ISO 3166 country codes.

The order table:

CREATE TABLE t."order"
(
-- inherited from table g.standard:  oid uuid NOT NULL,
-- ... some more columns inherited...
  szxmladdress text,
-- many more columns in this table
CONSTRAINT country_pkey PRIMARY KEY (oid)
) INHERITS (g.standard)
WITH (
  OIDS=FALSE
);


A typical entry in t."order".szxmladdress looks like this (the XML markup
itself was stripped by the list archiver; per the xpath() call below, the
address element carries a country attribute):

  ae0eb84f-9b8b-4fef-b87a-d6757bdfeaf9
  0bbdb48c-21c7-429e-944e-59a4d9ace9d5
  Hauptstraße

No XML field in the order table exceeds 2kB.

The 2-letter country code is extracted from the XML address by this function:

CREATE OR REPLACE FUNCTION g.getxmladdresscountryid(xaddr text)
  RETURNS uuid AS
$BODY$BEGIN
RETURN (SELECT oID FROM G.Country WHERE szIsoAlpha2 = 
array_to_string(xpath('/address/@country', xAddr::xml), '') ORDER BY lIsoCode 
LIMIT 1);
END$BODY$
  LANGUAGE plpgsql IMMUTABLE;

I know that this function is not really IMMUTABLE, but the country table 
changes only once every several years.

Created a functional index on the order table:

CREATE INDEX order_getxmladdresscountryid_fidx
  ON t."order"
  USING btree
  (g.getxmladdresscountryid(szxmladdress));

Joining order and country table limiting to 10 rows uses the index:

explain analyse
SELECT
GCountry.szISOAlpha2,
GCountry.szISOAlpha3,
GCountry.szISONum3
from
t.order torder
left join G.Country GCountry ON G.GetXmlAddressCountryID(TOrder.szXmlAddress) = 
GCountry.oID
limit 10

Gives:

Limit  (cost=0.56..8.45 rows=10 width=11) (actual time=0.644..4.764 rows=10 
loops=1)
  ->  Merge Right Join  (cost=0.56..10670.45 rows=13517 width=11) (actual 
time=0.642..4.754 rows=10 loops=1)
Merge Cond: (gcountry.oid = 
g.getxmladdresscountryid(torder.szxmladdress))
->  Index Scan using country_pkey on country gcountry  
(cost=0.27..38.05 rows=258 width=27) (actual time=0.025..0.067 rows=32 loops=1)
->  Index Scan using order_getxmladdresscountryid_fidx on "order" 
torder  (cost=0.29..7019.04 rows=13517 width=366) (actual time=0.020..0.058 
rows=10 loops=1)
Planning time: 0.603 ms
Execution time: 4.898 ms

But when I remove the "limit 10", the index is no longer used:

Hash Left Join  (cost=17.81..5397.46 rows=13517 width=11) (actual 
time=0.941..4721.372 rows=13510 loops=1)
  Hash Cond: (g.getxmladdresscountryid(torder.szxmladdress) = gcountry.oid)
  ->  Seq Scan on "order" torder  (cost=0.00..3504.17 rows=13517 width=366) 
(actual time=0.011..27.542 rows=13510 loops=1)
  ->  Hash  (cost=14.58..14.58 rows=258 width=27) (actual time=0.427..0.427 
rows=258 loops=1)
Buckets: 1024  Batches: 1  Memory Usage: 23kB
->  Seq Scan on country gcountry  (cost=0.00..14.58 rows=258 width=27) 
(actual time=0.008..0.226 rows=258 loops=1)
Planning time: 0.580 ms
Execution time: 4728.602 ms

Even a "limit 100" does not use the index any more.

However, the 4728 ms result is an almost linear extrapolation from the
10-row timing to 13,500 rows.

I played around with COST of the function between 1 and 2 and with several 
options on postgresql.conf without luck.
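One possibly relevant detail: the default COST of a PL/pgSQL function is 100, so values between 1 and 2 actually tell the planner the function is much cheaper than the default, making the function-evaluating seq scan look attractive. Raising it instead might steer the planner toward the functional index, which holds precomputed results (a sketch; the useful value would need experimenting):

```
-- Tell the planner the function is expensive, so plans that avoid
-- re-evaluating it (i.e. the functional index) are preferred.
ALTER FUNCTION g.getxmladdresscountryid(text) COST 10000;
```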

A sequential scan on the order table alone is carried out in 15 ms.

Thanks for any idea.

Klaus





[GENERAL] mysql_config_editor feature suggestion

2017-03-21 Thread Tom Ekberg

I have been working with MySQL a bit (yes, I know, heresy) and encountered a 
program called mysql_config_editor. In my opinion it does a better job of local 
password management than using a ~/.pgpass file. Instead of assuming that a 
mode of 600 will keep people from peeking at your password, it encrypts the 
password, but keeps the other parameters like host, port and user available for 
viewing as plaintext. You can read more about it here:

  https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html

The host, user, password values are grouped into what are called login paths 
which are of the form:

  [some_login_path]
  host = localhost
  user = localuser

Just like the config files you have no doubt seen before. The only way to set a 
password is to use the command:

  mysql_config_editor set --login-path=some_login_path --password

which will prompt the user to enter the password for the specified login path. 
The password is never seen as plain text. There are other commands to set, 
remove, print and reset values for a login path. The print command that shows a 
password will display this instead:

  password = *

Adding a similar feature for PostgreSQL will also require a change to the psql 
program to specify and handle --login-path used for authentication. This may 
also be the case for some of the other pg_* utilities.

I think adding a feature like mysql_config_editor to PostgreSQL is an easy way to set up 
multiple "personalities" for connecting to different PostgreSQL servers. The 
password protection will deter the curious user from gaining access to your data. It will 
not stop a determined hacker, but the idea is to make it more difficult.

Other than this mailing list, is there a way to make a feature request for 
PostgreSQL?

Tom Ekberg
Senior Computer Specialist, Lab Medicine
University of Washington Medical Center
1959 NE Pacific St, MS 357110
Seattle WA 98195
work: (206) 598-8544
email: tekb...@uw.edu






Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tue, Mar 21, 2017 at 1:45 PM, Adrian Klaver 
wrote:

> On 03/21/2017 12:11 PM, Alexander Farber wrote:
>
>> Thank you - this has worked:
>>
>> COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH
>> (FORMAT csv);
>> 1,2,1,'1 is nice by 2','2017-03-01'
>> 1,3,1,'1 is nice by 3','2017-03-02'
>> 1,4,1,'1 is nice by 4','2017-03-03'
>> 2,1,1,'2 is nice by 1','2017-03-01'
>> 2,3,1,'2 is nice by 3','2017-03-02'
>> 2,4,0,'2 is not nice by 4','2017-03-03'
>> \.
>>
>> but I am confused about the comments that I should use \copy and not
>> just COPY and also that I could leave out WITH and brackets.
>>
>
> The difference between COPY and \copy is explained here:
>
> https://www.postgresql.org/docs/9.6/static/app-psql.html
>
> \copy 
>
> Basically COPY runs as the server user and so the files it uses have to be
> accessible by the user the server runs as. \copy is a psql meta command
> that runs as local user so it can access files the local user can see and
> has privileges on.


​I take it, then, if one chooses not to use pre-existing files and instead
inline the content as shown here, the choice between \copy and COPY becomes
a matter of style and not correctness.  As a matter of style using \copy
makes it clear that everything that is needed to make the command work
exists locally.

The advantage to using "COPY FROM|TO stdin|stdout" is that the copy command
itself could (not tested) be spread out over multiple lines - which
especially for COPY TO can be advantageous.  The entire \copy meta-command
cannot be split up.

David J.


Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Adrian Klaver

On 03/21/2017 12:11 PM, Alexander Farber wrote:

Thank you - this has worked:

COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH
(FORMAT csv);
1,2,1,'1 is nice by 2','2017-03-01'
1,3,1,'1 is nice by 3','2017-03-02'
1,4,1,'1 is nice by 4','2017-03-03'
2,1,1,'2 is nice by 1','2017-03-01'
2,3,1,'2 is nice by 3','2017-03-02'
2,4,0,'2 is not nice by 4','2017-03-03'
\.

but I am confused about the comments that I should use \copy and not
just COPY and also that I could leave out WITH and brackets.


The difference between COPY and \copy is explained here:

https://www.postgresql.org/docs/9.6/static/app-psql.html

\copy 

Basically COPY runs as the server user and so the files it uses have to 
be accessible by the user the server runs as. \copy is a psql meta 
command that runs as local user so it can access files the local user 
can see and has privileges on.





Because please take a look at the 9.6.2 psql output (COPY works, and
leaving out WITH brackets - not):

words=> COPY words_reviews (uid, author, nice, review, updated) FROM
stdin WITH (FORMAT csv);
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.

1,2,1,'1 is nice by 2','2017-03-01'

1,3,1,'1 is nice by 3','2017-03-02'
1,4,1,'1 is nice by 4','2017-03-03'
2,1,1,'2 is nice by 1','2017-03-01'
2,3,1,'2 is nice by 3','2017-03-02'
2,4,0,'2 is not nice by 4','2017-03-03'
\.

>> >> >> >> >> COPY 6

words=> COPY words_reviews (uid, author, nice, review, updated) FROM
stdin FORMAT csv;
ERROR:  syntax error at or near "FORMAT"
LINE 1: ...s (uid, author, nice, review, updated) FROM stdin FORMAT csv...
 ^




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tue, Mar 21, 2017 at 12:45 PM, Paul Jungwirth <
p...@illuminatedcomputing.com> wrote:

> On 03/21/2017 12:21 PM, David G. Johnston wrote:
>
>> >   words=> COPY words_reviews (uid, author, nice, review, updated) FROM
>> >   stdin FORMAT csv;
>>
>> What did you read that led you to think the above should work?
>>
>
> I don't know about COPY FROM, but COPY TO works without parens (or
> FORMAT), like this:
>
> db=> copy (select 1+1, 2+2) to stdout with csv;
> 2,4
>
> (tested on pg 9.5)
>
> I never use parens, so I was surprised to see in the docs and the replies
> here that they are necessary. Am I just exploiting a bug in the parser?


That's documented backward compatibility syntax:

https://www.postgresql.org/docs/9.6/static/sql-copy.html

bottom of the page.

David J.
​


Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Paul Jungwirth

On 03/21/2017 12:21 PM, David G. Johnston wrote:

>   words=> COPY words_reviews (uid, author, nice, review, updated) FROM
>   stdin FORMAT csv;

What did you read that led you to think the above should work?


I don't know about COPY FROM, but COPY TO works without parens (or 
FORMAT), like this:


db=> copy (select 1+1, 2+2) to stdout with csv; 


2,4

(tested on pg 9.5)

I never use parens, so I was surprised to see in the docs and the 
replies here that they are necessary. Am I just exploiting a bug in the 
parser?


Paul





Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tuesday, March 21, 2017, Alexander Farber 
wrote:
>
> words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin
> FORMAT csv;
>

What did you read that led you to think the above should work?

David J.


Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Alexander Farber
Thank you - this has worked:

COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH
(FORMAT csv);
1,2,1,'1 is nice by 2','2017-03-01'
1,3,1,'1 is nice by 3','2017-03-02'
1,4,1,'1 is nice by 4','2017-03-03'
2,1,1,'2 is nice by 1','2017-03-01'
2,3,1,'2 is nice by 3','2017-03-02'
2,4,0,'2 is not nice by 4','2017-03-03'
\.

but I am confused about the comments that I should use \copy and not just
COPY and also that I could leave out WITH and brackets.

Because please take a look at the 9.6.2 psql output (COPY works, and
leaving out WITH brackets - not):

words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin
WITH (FORMAT csv);
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1,2,1,'1 is nice by 2','2017-03-01'
1,3,1,'1 is nice by 3','2017-03-02'
1,4,1,'1 is nice by 4','2017-03-03'
2,1,1,'2 is nice by 1','2017-03-01'
2,3,1,'2 is nice by 3','2017-03-02'
2,4,0,'2 is not nice by 4','2017-03-03'
\.
>> >> >> >> >> >> COPY 6
words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin
FORMAT csv;
ERROR:  syntax error at or near "FORMAT"
LINE 1: ...s (uid, author, nice, review, updated) FROM stdin FORMAT csv...
 ^


Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Francisco Olarte
Alexander:

On Tue, Mar 21, 2017 at 6:31 PM, Alexander Farber
 wrote:
> I keep rereading https://www.postgresql.org/docs/9.6/static/sql-copy.html
> but just can't figure the proper syntax to put some records into the table:

It's not that complex, let's see

> words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin
> WITH FORMAT 'csv';
> ERROR:  syntax error at or near "FORMAT"
> LINE 1: ...d, author, nice, review, updated) FROM stdin WITH FORMAT 'cs...
>  ^

Here you have an error ( more on this later ) so the next lines are
going to be interpreted as a new command

> words=> 1,2,1,'1 is nice by 2','2017-03-01',

You can see this clearly because the prompt is =>, the initial prompt, not
->, the continuation prompt.

> words-> 1,3,1,'1 is nice by 3','2017-03-02',
> words-> 1,4,1,'1 is nice by 4','2017-03-03',
> words-> 2,1,1,'2 is nice by 1','2017-03-01',
> words-> 2,3,1,'2 is nice by 3','2017-03-02',

Also, you are putting an extra comma at the end of the lines. This means
you have an empty string at the end, one extra field. I do not remember
now whether it hurts, but better to omit it.
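The effect of that trailing comma can be seen with Python's csv module (a
sketch independent of PostgreSQL; COPY's CSV parser would likewise see six
fields for a five-column target):

```python
import csv
import io

# A line with a trailing comma parses into one extra, empty field.
with_comma = io.StringIO("1,2,1,'1 is nice by 2','2017-03-01',\n")
without_comma = io.StringIO("1,2,1,'1 is nice by 2','2017-03-01'\n")

row_with = next(csv.reader(with_comma))
row_without = next(csv.reader(without_comma))

print(len(row_with))     # 6 fields: the trailing comma adds an empty string
print(len(row_without))  # 5 fields, matching the 5 target columns
print(repr(row_with[-1]))  # '' -- the spurious empty field
```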

> words-> 2,4,0,'2 is not nice by 4','2017-03-03'
> words-> \.
> Invalid command \.. Try \? for help.

All the lines up to here are considered part of the previous SQL command
( remember => vs -> ). You are not in copy mode, so psql tries to
interpret '\.' as a meta-command ( like \d ) but fails.

> words-> ;
> ERROR:  syntax error at or near "1"
> LINE 1: 1,2,1,'1 is nice by 2','2017-03-01',
> ^

And here you terminate the SQL command, so it fails ( note the error
refers to the first problem, the initial line entered at the => prompt ).

> I am not sure if FORMAT csv or FORMAT 'csv' should be used.

That is easy: try both. BUT! If you read the manual with care you will
notice it is "with ( options )", not "with options", so you are
missing the parentheses:

web_tarotpagos_staging=# create temporary table t(a varchar, b varchar);
CREATE TABLE

-- NO parens, no luck:
web_tarotpagos_staging=# copy t(a,b) from stdin with format csv;
ERROR:  syntax error at or near "format"
LINE 1: copy t(a,b) from stdin with format csv;
                                    ^
web_tarotpagos_staging=# copy t(a,b) from stdin with format 'csv';
ERROR:  syntax error at or near "format"
LINE 1: copy t(a,b) from stdin with format 'csv';
                                    ^

BUT, as soon as I put them:

web_tarotpagos_staging=# copy t(a,b) from stdin with (format csv);
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> \.

Note the enter-data prompt. Also note COPY is safe to try, as you can
just abort it.

> And I have tried adding/removing commas at the end of lines too.
That is commented above.

> I have looked at pg_dump output, but it does not use csv.

pg_dump uses the default text format, a little more difficult but
vastly superior ( IMNSHO ) to CSV. It ( by default ) separates records
with newlines and fields with tabs, and escapes newlines, tabs and
backslashes in data with a backslash, so the transformation is
context-free, much easier than CSV:

Copy out: Replace NULL with '\N', newline with '\n', tab with '\t',
backslash with '\\', join fields with tab, print with newline at end.

Copy In: Read till newline, split on tabs, replace '\n' with newline,
'\t' with tab, '\\' with backslash.

Much easier to get right than CSV ( how do you encode the C string ",;
\n\"\n\t;  \t\"\'" ? )

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread John R Pierce

On 3/21/2017 10:31 AM, Alexander Farber wrote:
words=> COPY words_reviews (uid, author, nice, review, updated) FROM 
stdin WITH FORMAT 'csv';

ERROR:  syntax error at or near "FORMAT"
LINE 1: ...d, author, nice, review, updated) FROM stdin WITH FORMAT 'cs...


it's just csv, not 'csv' ...


And I have tried adding/removing commas at the end of lines too.


if it's getting a syntax error on the COPY command, it hasn't yet read the
data...


note that COPY ... FROM STDIN doesn't actually read from the server's
stdin; it requires the data to be passed through the connection with a
special API (PQputCopyData() in libpq, or similar in other client
libraries).   you can use \copy in psql to stream the data from the same
input.
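A minimal sketch of the two forms, using a hypothetical file name
(reviews.csv) and the table from this thread:

```sql
-- \copy is a psql meta-command: psql opens the file itself and streams
-- the rows to the server through the COPY protocol (PQputCopyData).
-- Note: it is a meta-command, so it must stay on one line and takes
-- no trailing semicolon.
\copy words_reviews (uid, author, nice, review, updated) FROM 'reviews.csv' WITH (FORMAT csv)

-- Server-side COPY FROM a file path, by contrast, reads a file on the
-- *server* machine and normally requires superuser privileges.
COPY words_reviews (uid, author, nice, review, updated)
FROM '/tmp/reviews.csv' WITH (FORMAT csv);
```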





--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tue, Mar 21, 2017 at 10:31 AM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Good evening,
>
> I keep rereading https://www.postgresql.org/docs/9.6/static/sql-copy.html
> but just can't figure the proper syntax to put some records into the table:
> [...]
>
> words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin
> WITH FORMAT 'csv';
> ERROR:  syntax error at or near "FORMAT"
> LINE 1: ...d, author, nice, review, updated) FROM stdin WITH FORMAT 'cs...
>
>
[ [ WITH ] ( option [, ...] ) ]

The above means the entire "WITH" section is optional, as is the word
WITH.  However, if you want to add "with" options they must appear within
parentheses, those are not optional.  Multiple options can appear within
the single set of parentheses.

"""
FORMAT
Selects the data format to be read or written: text, csv (Comma Separated
Values), or binary. The default is text.
"""

Valid values for format are as listed; no single quotes are required (I'm
not sure whether quoted values are allowed).

Therefore:

WITH (FORMAT csv)
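And, since multiple options share the single set of parentheses, something
like this is also valid (a sketch; HEADER and NULL are two other
documented COPY options):

```sql
COPY words_reviews (uid, author, nice, review, updated)
FROM stdin WITH (FORMAT csv, HEADER false, NULL '');
```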

David J.


[GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Alexander Farber
Good evening,

I keep rereading https://www.postgresql.org/docs/9.6/static/sql-copy.html
but just can't figure the proper syntax to put some records into the table:

words=> \d words_reviews;
           Table "public.words_reviews"
 Column  |           Type           | Modifiers
---------+--------------------------+-----------
 uid     | integer                  | not null
 author  | integer                  | not null
 nice    | integer                  | not null
 review  | character varying(255)   |
 updated | timestamp with time zone | not null
Indexes:
"words_reviews_pkey" PRIMARY KEY, btree (uid, author)
Check constraints:
"words_reviews_check" CHECK (uid <> author)
"words_reviews_nice_check" CHECK (nice = 0 OR nice = 1)
Foreign-key constraints:
"words_reviews_author_fkey" FOREIGN KEY (author) REFERENCES
words_users(uid) ON DELETE CASCADE
"words_reviews_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid)
ON DELETE CASCADE

words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin
WITH FORMAT 'csv';
ERROR:  syntax error at or near "FORMAT"
LINE 1: ...d, author, nice, review, updated) FROM stdin WITH FORMAT 'cs...
 ^
words=> 1,2,1,'1 is nice by 2','2017-03-01',
words-> 1,3,1,'1 is nice by 3','2017-03-02',
words-> 1,4,1,'1 is nice by 4','2017-03-03',
words-> 2,1,1,'2 is nice by 1','2017-03-01',
words-> 2,3,1,'2 is nice by 3','2017-03-02',
words-> 2,4,0,'2 is not nice by 4','2017-03-03'
words-> \.
Invalid command \.. Try \? for help.
words-> ;
ERROR:  syntax error at or near "1"
LINE 1: 1,2,1,'1 is nice by 2','2017-03-01',
^

I am not sure if FORMAT csv or FORMAT 'csv' should be used.

And I have tried adding/removing commas at the end of lines too.

I have looked at pg_dump output, but it does not use csv.

Regards
Alex


Re: [GENERAL] Why is this functional index not used?

2017-03-21 Thread Klaus P. Pieper
I played around with COST up to 99 – still no improvement. 

 

HOWEVER – I determined that more than 95% of the records in the database are 
local addresses and the indexed function returns NULL. 

This seems to create an issue with LEFT JOIN. 

 

Increasing the COST of the function to 2 and rewriting the query as follows 
provides a significantly better result: 

 

explain analyse
SELECT
  GCountry.szISOAlpha2,
  GCountry.szISOAlpha3,
  GCountry.szISONum3
from
  t.order torder
  join G.Country GCountry
    ON G.GetXmlAddressCountryID(TOrder.szXmlAddress) = GCountry.oID
UNION ALL
SELECT
  'XX'::TEXT,  -- can use NULL::TEXT with no difference
  'XXX'::TEXT, -- can use NULL::TEXT with no difference
  '999'::TEXT  -- can use NULL::TEXT with no difference
from
  t.order torder
WHERE G.GetXmlAddressCountryID(TOrder.szXmlAddress) IS NULL

Gives:


Append  (cost=0.29..7661.82 rows=13578 width=11) (actual time=0.106..20.464 rows=13510 loops=1)
  ->  Nested Loop  (cost=0.29..7386.57 rows=13510 width=11) (actual time=0.104..1.235 rows=73 loops=1)
        ->  Seq Scan on country gcountry  (cost=0.00..14.58 rows=258 width=27) (actual time=0.012..0.126 rows=258 loops=1)
        ->  Index Scan using order_getxmladdresscountryid_fidx on "order" torder  (cost=0.29..27.89 rows=68 width=366) (actual time=0.003..0.003 rows=0 loops=258)
              Index Cond: (g.getxmladdresscountryid(szxmladdress) = gcountry.oid)
  ->  Index Scan using order_getxmladdresscountryid_fidx on "order" torder_1  (cost=0.29..139.47 rows=68 width=0) (actual time=0.013..12.840 rows=13437 loops=1)
        Index Cond: (g.getxmladdresscountryid(szxmladdress) IS NULL)
Planning time: 1.104 ms
Execution time: 23.607 ms

 

A similar result was achieved by returning a dummy ID from the function
for records without a country.

 

Any idea why the functional index does not work in a LEFT JOIN query? 

 

Thanks Klaus

 

 

From: Jeff Janes [mailto:jeff.ja...@gmail.com]
Sent: Tuesday, 21 March 2017 08:22
To: Klaus P. Pieper
Subject: Re: [GENERAL] Why is this functional index not used?

 

On Mon, Mar 20, 2017 at 1:25 PM, Klaus P. Pieper <  
kpi6...@gmail.com> wrote:

 

I played around with COST of the function between 1 and 2 and with several 
options on postgresql.conf without luck.

 

Why not more than 2?  The true value could be much higher than that.

 

Cheers,

 

Jeff



Re: [GENERAL] Unexpected interval comparison

2017-03-21 Thread Adrian Klaver

On 03/21/2017 07:42 AM, Tom Lane wrote:
> Frazer McLean  writes:
>> I came across an unexpected comparison (tested on PostgreSQL 9.4 and
>> 9.6) for intervals with a large difference in magnitude.
>>
>>   '1 year'::interval > '32618665 years'::interval;
>>
>> Is this a bug?
>
> It looks like the problem is overflow of the result of interval_cmp_value,
> because it's trying to compute
>
> =# select '32618665'::int8 * 30 * 86400 * 1000000;
> ERROR:  bigint out of range
>
> It's not immediately obvious how to avoid that while preserving the same
> comparison semantics :-(


Not sure if it helps, but this works:

test=# select extract(epoch from '1 year'::interval) >
       extract(epoch from '32618665 years'::interval);
 ?column?
----------
 f
(1 row)

> regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Unexpected interval comparison

2017-03-21 Thread Tom Lane
Frazer McLean  writes:
> I came across an unexpected comparison (tested on PostgreSQL 9.4 and
> 9.6) for intervals with a large difference in magnitude.

>   '1 year'::interval > '32618665 years'::interval;

> Is this a bug?

It looks like the problem is overflow of the result of interval_cmp_value,
because it's trying to compute

=# select '32618665'::int8 * 30 * 86400 * 1000000;
ERROR:  bigint out of range

It's not immediately obvious how to avoid that while preserving the same
comparison semantics :-(
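Tom's diagnosis can be reproduced with a quick sketch of the arithmetic.
The constants here are an assumption from his message (a month scaled to
30 days, times kept in microseconds) rather than a transcript of the C
code:

```python
# Sketch of the value interval_cmp_value builds for comparison.
# Assumption: months are scaled to 30 days of 86400 * 1000000 microseconds.
INT64_MAX = 2**63 - 1
USECS_PER_DAY = 86400 * 1000000

def cmp_value_of_years(years):
    # An interval of N years is stored as N * 12 months.
    months = years * 12
    return months * 30 * USECS_PER_DAY

big = cmp_value_of_years(32618665)
small = cmp_value_of_years(1)

# Python integers don't overflow, so we can see the value exceeds int64;
# in C the same multiplication overflows, corrupting the comparison.
print(big > INT64_MAX)    # True
print(small > INT64_MAX)  # False
```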

regards, tom lane





[GENERAL] Unexpected interval comparison

2017-03-21 Thread Frazer McLean
I came across an unexpected comparison (tested on PostgreSQL 9.4 and
9.6) for intervals with a large difference in magnitude.


I narrowed it down to this example, where comparisons with this range
give the wrong value:


postgres=# SELECT
  '1 year'::interval > '3854933 years'::interval,
  '1 year'::interval > '3854934 years'::interval,
  '1 year'::interval > '32618664 years'::interval,
  '1 year'::interval > '32618665 years'::interval;

 ?column? | ?column? | ?column? | ?column?
----------+----------+----------+----------
 f        | t        | t        | f
(1 row)



Is this a bug? Should I not be comparing intervals? It would seem the
interval type has enough information to give the correct answer here.


Regards,



Frazer McLean