Re: Logging [RESOLVED]

2019-12-06 Thread Peter J. Holzer
On 2019-12-04 16:03:24 -0800, Rich Shepard wrote:
> On Wed, 4 Dec 2019, Stephen Eilert wrote:
> > Usually, this is done by logrotate or a similar mechanism in your system.
> > You’ll likely find that other logs in your system follow a similar
> > pattern, not just Postgresql.
> 
> I just checked /etc/logrotate.d/postgres and it was set at daily with rotate
> 7. I changed rotate to 4 but the dates are 3-4 days apart, not sequential.

Your log files are extremely small. At only a few hundred bytes every 3
or 4 days it is very likely that nothing is logged on most days. If your
log file is empty, logrotate won't rotate it if the option "notifempty"
is set (which is probably the case).
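
If you do want empty logs rotated daily regardless, the relevant knob lives in the logrotate stanza. A minimal sketch (the path and surrounding options here are assumptions; check your distribution's actual /etc/logrotate.d/postgres):

```
/var/log/postgresql/*.log {
    daily
    rotate 4
    missingok
    ifempty        # rotate even when the log is empty (the opposite of notifempty)
    compress
    delaycompress
}
```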

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


signature.asc
Description: PGP signature


Re: migration from 9.4 to 9.6

2019-12-06 Thread Alan Hodgson
On Fri, 2019-12-06 at 21:38 +, Julie Nishimura wrote:
> I'd like to copy one single database from 9.4 cluster to a new 9.6
> cluster (migration with the upgrade), to the different host

Put 9.4 on the new server. Replicate the db to it. When you're ready to
switch, shut down the master, promote the new db, and then shut it down
and pg_upgrade -k it to 9.6.

That does require the binaries from both versions to be on the new
server for a while, but it'll give you the least downtime and it's a
very simple replication setup.
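
The sequence above can be sketched as follows, once the promoted 9.4 standby has been shut down cleanly (binary and data directory paths are assumptions based on common packaging; adjust for your system):

```shell
# Both 9.4 and 9.6 binaries must be installed on the new host.
/usr/pgsql-9.6/bin/initdb -D /var/lib/pgsql/9.6/data

# With the promoted 9.4 cluster stopped, upgrade in link mode:
/usr/pgsql-9.6/bin/pg_upgrade \
  -b /usr/pgsql-9.4/bin -B /usr/pgsql-9.6/bin \
  -d /var/lib/pgsql/9.4/data -D /var/lib/pgsql/9.6/data \
  -k   # --link: hard-link data files instead of copying, minimizing downtime
```

Note that with -k (--link) the old cluster can no longer be started safely once the new one has been started, so take a backup first.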




Re: migration from 9.4 to 9.6

2019-12-06 Thread Adrian Klaver

On 12/6/19 1:38 PM, Julie Nishimura wrote:
I'd like to copy one single database from 9.4 cluster to a new 9.6 
cluster (migration with the upgrade), to the different host


Choices:

1) Dump/restore.

2) Aforementioned Slony.

3) The pglogical 
extension(https://www.2ndquadrant.com/en/resources/pglogical/pglogical-docs/). 
The forerunner to the built in logical replication in 10+.
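
Per the pglogical documentation, a minimal setup looks roughly like this (the node names and DSNs below are placeholders, and the subscriber's schema must be pre-created, e.g. via pg_dump -s):

```sql
-- On the 9.4 provider:
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'provider1',
    dsn := 'host=oldhost port=5432 dbname=mydb');
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On the 9.6 subscriber:
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'subscriber1',
    dsn := 'host=newhost port=5432 dbname=mydb');
SELECT pglogical.create_subscription(
    subscription_name := 'sub1',
    provider_dsn := 'host=oldhost port=5432 dbname=mydb');
```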






*From:* Adrian Klaver 
*Sent:* Friday, December 6, 2019 1:32 PM
*To:* Julie Nishimura ; 
pgsql-general@lists.postgresql.org ; 
pgsql-general 

*Subject:* Re: migration from 9.4 to 9.6
On 12/6/19 1:19 PM, Julie Nishimura wrote:
Hello there, what would be my best way to migrate 2 tb single database 
from cluster 9.4 to another machine on 9.6?


Not clear if you want to add the database to an existing 9.6 cluster or
create a new 9.6 cluster containing the 9.4 database?



Thank you!



--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: migration from 9.4 to 9.6

2019-12-06 Thread Julie Nishimura
I'd like to copy a single database from the 9.4 cluster to a new 9.6 cluster 
(migration with the upgrade) on a different host



From: Adrian Klaver 
Sent: Friday, December 6, 2019 1:32 PM
To: Julie Nishimura ; pgsql-general@lists.postgresql.org 
; pgsql-general 

Subject: Re: migration from 9.4 to 9.6

On 12/6/19 1:19 PM, Julie Nishimura wrote:
> Hello there, what would be my best way to migrate 2 tb single database
> from cluster 9.4 to another machine on 9.6?

Not clear if you want to add the database to an existing 9.6 cluster or
create a new 9.6 cluster containing the 9.4 database?

>
> Thank you!


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: migration from 9.4 to 9.6

2019-12-06 Thread Adrian Klaver

On 12/6/19 1:19 PM, Julie Nishimura wrote:
Hello there, what would be my best way to migrate 2 tb single database 
from cluster 9.4 to another machine on 9.6?


Not clear if you want to add the database to an existing 9.6 cluster or 
create a new 9.6 cluster containing the 9.4 database?




Thank you!



--
Adrian Klaver
adrian.kla...@aklaver.com




migration from 9.4 to 9.6

2019-12-06 Thread Julie Nishimura
Hello there, what would be my best way to migrate a 2 TB single database from 
a 9.4 cluster to another machine on 9.6?

Thank you!


Re: upgrade and migrate

2019-12-06 Thread Julie Nishimura
These are great points. Thanks! I will push for some testing environment


From: Justin 
Sent: Friday, December 6, 2019 10:06 AM
To: Julie Nishimura 
Cc: Tom Lane ; Stephen Frost ; Laurenz 
Albe ; Michael Paquier ; 
pgsql-general@lists.postgresql.org ; 
pgsql-general 
Subject: Re: upgrade and migrate

Off topic, but food for thought given the jump in versions from 8.3 to 9.6 or 
greater.

A list of major changes I can think of to watch out for that can bite without 
warning:

The ~=~ operators for LIKE were dropped in 8.4; the => operator for hstore was 
dropped in 9.0.
In 9.1, standard_conforming_strings is on by default; previously the default was 
off.
9.4 changed array behavior.
9.5 changed operator precedence; this affects output, and it is nearly impossible 
to find all the code it can affect.
10 removed contrib/tsearch2 (added to core in 8.3).

There are a lot more minor changes that can bite.

Has a test machine been set up and run yet? Just dumping the schema and 
restoring it to the later version will point out several issues with custom 
data types, plpgsql, other PL languages, and any missing extensions.








On Fri, Dec 6, 2019 at 11:16 AM Julie Nishimura 
mailto:juliez...@hotmail.com>> wrote:
Thank you everybody for your suggestions.
So, to summarize - we can run pg_upgrade from 8.3 to 9.4 (in place), fix app 
related issues (if any), then migrate to a version more recent than 9.6, either 
through pg_basebackup or through logical replication (if we would upgrade to 
version 10).

First step requires downtime. Second does not. Correct?

-Julie


From: Tom Lane mailto:t...@sss.pgh.pa.us>>
Sent: Wednesday, December 4, 2019 5:28 AM
To: Stephen Frost mailto:sfr...@snowman.net>>
Cc: Laurenz Albe mailto:laurenz.a...@cybertec.at>>; 
Michael Paquier mailto:mich...@paquier.xyz>>; Julie 
Nishimura mailto:juliez...@hotmail.com>>; 
pgsql-general@lists.postgresql.org 
mailto:pgsql-general@lists.postgresql.org>>;
 pgsql-general 
mailto:pgsql-gene...@postgresql.org>>
Subject: Re: upgrade and migrate

Stephen Frost mailto:sfr...@snowman.net>> writes:
> * Laurenz Albe (laurenz.a...@cybertec.at) 
> wrote:
>> Right, Slony is the way to go, since pg_upgrade doesn't support 8.3.
>> I would upgrade to a version more recent than 9.6.

> So...  there's a bit of history here.  pg_upgrade in 9.4 actually does
> support upgrading from 8.3.X.  Support for upgrading from 8.3 was
> removed in 2209b3923a7afe0b6033ecfea972219df252ca8e.

Yeah.  Also note that 8.3 to 9.6-or-newer is going to be a pretty
huge jump in terms of minor compatibility issues (have you read
all the relevant release notes?).  So there's something to be said
for breaking this down into two steps: update to 9.4, test/fix
your applications against that, then make a second jump to something
current.  Each of those jumps could be handled by the respective
version of pg_upgrade.  I concur with Laurenz's advice that stopping
at 9.6 is probably not your best choice for a migration today.

regards, tom lane


Re: Slow planning...

2019-12-06 Thread Pavel Stehule
pá 6. 12. 2019 v 15:12 odesílatel Mladen Marinović <
mladen.marino...@kset.org> napsal:

> After a couple of hours of trying different stuff, set enable_mergejoin  =
> off made the planning time look better: Planning time: 0.322 ms
> Any ideas why this helps?
>

Please, can you try reindexing all related indexes? Sometimes planning time is
high when indexes are bloated.
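
A sketch of that suggestion for the tables in the example plan (on 9.6, plain REINDEX takes exclusive locks, so schedule it in a quiet window):

```sql
-- Rebuild just the indexes the planner consults for this query:
REINDEX INDEX uq_object_id_timestamp;
REINDEX INDEX table_b_a_id;

-- Or rebuild everything on the involved tables:
REINDEX TABLE table_a;
REINDEX TABLE table_b;
```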

Regards

Pavel


> Regards,
> Mladen Marinović
>
> On Fri, Dec 6, 2019 at 11:14 AM Mladen Marinović <
> mladen.marino...@kset.org> wrote:
>
>> Hi,
>>
>> Since this morning, our system has been running slower than usual. It turns out
>> that some queries take a very long time to plan (> 1 second). The problem
>> occurs when joining bigger tables. There are no partitions on the tables
>> involved. The problem correlates in time with the last autovacuum/autoanalyze
>> this morning, but manual vacuuming and analyzing did not fix the problem.
>>
>> An example explain is:
>>
>> EXPLAIN ANALYSE
>> SELECT 1
>> FROM table_a a
>>   LEFT JOIN table_b b ON b.a_id = a.id
>> WHERE a.object_id=13
>>   AND a.timestamp<'2019-12-06'
>>   AND a.timestamp>'2019-12-03'
>>
>> Nested Loop Left Join  (cost=1.28..18137.57 rows=6913 width=4) (actual
>> time=0.043..90.016 rows=14850 loops=1)
>>   ->  Index Scan using uq_object_id_timestamp on table_a a
>>  (cost=0.70..7038.49 rows=6913 width=8) (actual time=0.028..21.832
>> rows=14850 loops=1)
>> Index Cond: ((object_id = 13) AND (timestamp <
>> '2019-12-06'::timestamp with time zone) AND (timestamp >
>> '2019-12-03'::timestamp with time zone))
>>   ->  Index Only Scan using table_b_a_id on table_b b  (cost=0.57..1.60
>> rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=14850)
>> Index Cond: (a_id = a.id)
>> Heap Fetches: 0
>> Planning time: 1908.550 ms
>> Execution time: 91.004 ms
>>
>> The same query on a similar parallel system takes 5 ms to plan (PG 9.4).
>>
>> Is there a way to detect why the planning is taking this long?
>>
>> The database is 9.6.1 with 32 GB of shared_buffers and 1 GB of
>> maintenance_work_mem, and machine CPU stays below 80% all the time.
>>
>> Regards,
>> Mladen Marinović
>>
>


Re: upgrade and migrate

2019-12-06 Thread Justin
Off topic, but food for thought given the jump in versions from 8.3 to 9.6 or
greater.

A list of major changes I can think of to watch out for that can bite without
warning:

The ~=~ operators for LIKE were dropped in 8.4; the => operator for hstore was
dropped in 9.0.
In 9.1, standard_conforming_strings is on by default; previously the default
was off.
9.4 changed array behavior.
9.5 changed operator precedence; this affects output, and it is nearly
impossible to find all the code it can affect.
10 removed contrib/tsearch2 (added to core in 8.3).

There are a lot more minor changes that can bite.

Has a test machine been set up and run yet? Just dumping the schema and
restoring it to the later version will point out several issues with custom
data types, plpgsql, other PL languages, and any missing extensions.
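
The schema-only smoke test can be as simple as this (hostnames and the database name are placeholders; use the newer version's client tools for the dump):

```shell
# Dump globals (roles, tablespaces) and the schema from the old 8.3 cluster,
# then restore into a scratch cluster running the target version.
pg_dumpall -h oldhost --globals-only > globals.sql
pg_dump -h oldhost --schema-only mydb > schema.sql

psql -h testhost -f globals.sql postgres
createdb -h testhost mydb
psql -h testhost -f schema.sql mydb 2> schema_errors.log
# Review schema_errors.log for missing extensions, custom types, PL issues.
```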








On Fri, Dec 6, 2019 at 11:16 AM Julie Nishimura 
wrote:

> Thank you everybody for your suggestions.
> So, to summarize - we can run pg_upgrade from 8.3 to 9.4 (in place), fix
> app related issues (if any), then migrate to a version more recent than
> 9.6, either through pg_basebackup or through logical replication (if we
> would upgrade to version 10).
>
> First step requires downtime. Second does not. Correct?
>
> -Julie
>
> --
> *From:* Tom Lane 
> *Sent:* Wednesday, December 4, 2019 5:28 AM
> *To:* Stephen Frost 
> *Cc:* Laurenz Albe ; Michael Paquier <
> mich...@paquier.xyz>; Julie Nishimura ;
> pgsql-general@lists.postgresql.org ;
> pgsql-general 
> *Subject:* Re: upgrade and migrate
>
> Stephen Frost  writes:
> > * Laurenz Albe (laurenz.a...@cybertec.at) wrote:
> >> Right, Slony is the way to go, since pg_upgrade doesn't support 8.3.
> >> I would upgrade to a version more recent than 9.6.
>
> > So...  there's a bit of history here.  pg_upgrade in 9.4 actually does
> > support upgrading from 8.3.X.  Support for upgrading from 8.3 was
> > removed in 2209b3923a7afe0b6033ecfea972219df252ca8e.
>
> Yeah.  Also note that 8.3 to 9.6-or-newer is going to be a pretty
> huge jump in terms of minor compatibility issues (have you read
> all the relevant release notes?).  So there's something to be said
> for breaking this down into two steps: update to 9.4, test/fix
> your applications against that, then make a second jump to something
> current.  Each of those jumps could be handled by the respective
> version of pg_upgrade.  I concur with Laurenz's advice that stopping
> at 9.6 is probably not your best choice for a migration today.
>
> regards, tom lane
>


Re: Q: cert authentication and user remapping fails

2019-12-06 Thread Tom Lane
Albrecht Dreß  writes:
> In my installation, the user certificate CN's contain human-readable names 
> (utf8, with spaces, etc.).  I want *all* users connecting with cert 
> authentication to be mapped to a certain database role.

I don't think that the user name mapping feature works in the way
you are hoping it does.  According to
https://www.postgresql.org/docs/current/auth-username-maps.html
what the map does is to specify allowed combinations of the validated
external user name ("Albrecht Dreß" in your example) and the database
role the user asked to connect as.  So given

> certaccess  /^.*$   testuser

it should be possible to do

psql -h dbserver -U testuser testdb

with a certificate that has CN="Albrecht Dreß" (or anything else).
But the map won't result in silently connecting you as some other
role than the one you asked for.

(I haven't actually tried this, but that's how I read the docs.)

regards, tom lane




Q: cert authentication and user remapping fails

2019-12-06 Thread Albrecht Dreß

Hi all,

I have a problem with psql cert authentication and user mapping.

In my installation, the user certificate CN's contain human-readable names 
(utf8, with spaces, etc.).  I want *all* users connecting with cert 
authentication to be mapped to a certain database role.  The server runs on 
Debian Stretch, using the package “postgresql-10” ver. “10.11-1.pgdg90+1”.

The configuration in pg_hba.conf is, inter alia


hostssl testdb  all 172.16.61.0/24  cert map=certaccess


The file pg_ident.conf contains the line (which should, as the re matches 
*everything*, map all users?)


certaccess  /^.*$   testuser


I have a user certificate, issued by the same CA as the server cert, with CN 
"Albrecht Dreß".  Running psql on a remote client fails:


albrecht@deneb:~$ psql -h dbserver -U "Albrecht Dreß" testdb
psql: FATAL:  certificate authentication failed for user "Albrecht Dreß"
FATAL:  no pg_hba.conf entry for host "172.16.61.70", user "Albrecht Dreß", database 
"testdb", SSL off


The server log says:


Albrecht Dreß@testdb LOG:  no match in usermap "certaccess" for user "Albrecht Dreß" 
authenticated as "Albrecht Dreß"
Albrecht Dreß@testdb FATAL:  certificate authentication failed for user "Albrecht 
Dreß"
Albrecht Dreß@testdb DETAIL:  Connection matched pg_hba.conf line 136: "  hostssl
testdball 172.16.61.0/24 cert map=certaccess"
Albrecht Dreß@testdb FATAL:  no pg_hba.conf entry for host "172.16.61.70", user "Albrecht 
Dreß", database "testdb", SSL off


For me, this looks as if the certificate is accepted, but the regexp match of 
the CN somehow fails.

Note 1: I don't have a role “Albrecht Dreß” defined.
Note 2: using my “real” user name (albrecht), i.e. omitting the “-U” option 
above, fails with the server log message “provided user name (albrecht) and 
authenticated user name (Albrecht Dreß) do not match”.

Any idea what I did wrong, and how I can get cert authentication working?

Thanks in advance,
Albrecht.

pgpXpcgQfI1rl.pgp
Description: PGP signature


Re: upgrade and migrate

2019-12-06 Thread Adrian Klaver

On 12/6/19 8:14 AM, Julie Nishimura wrote:

Thank you everybody for your suggestions.
So, to summarize - we can run pg_upgrade from 8.3 to 9.4 (in place), fix 
app related issues (if any), then migrate to a version more recent than 
9.6, either through pg_basebackup or through logical replication (if we 
would upgrade to version 10).


First step requires downtime. Second does not. Correct?


Well pg_basebackup is a binary copy so I am pretty sure you cannot use 
the copy from an old Postgres version with a new Postgres major version. 
I am also pretty sure whatever you do there is going to be some 
downtime. Left unsaid to date and relevant to downtime:


1) Space/machines available to juggle multiple Postgres instances?

2) Network proximity of above.

3) Whether it is essential all the databases remain in a single cluster?



-Julie


*From:* Tom Lane 
*Sent:* Wednesday, December 4, 2019 5:28 AM
*To:* Stephen Frost 
*Cc:* Laurenz Albe ; Michael Paquier 
; Julie Nishimura ; 
pgsql-general@lists.postgresql.org ; 
pgsql-general 

*Subject:* Re: upgrade and migrate
Stephen Frost  writes:

* Laurenz Albe (laurenz.a...@cybertec.at) wrote:

Right, Slony is the way to go, since pg_upgrade doesn't support 8.3.
I would upgrade to a version more recent than 9.6.



So...  there's a bit of history here.  pg_upgrade in 9.4 actually does
support upgrading from 8.3.X.  Support for upgrading from 8.3 was
removed in 2209b3923a7afe0b6033ecfea972219df252ca8e.


Yeah.  Also note that 8.3 to 9.6-or-newer is going to be a pretty
huge jump in terms of minor compatibility issues (have you read
all the relevant release notes?).  So there's something to be said
for breaking this down into two steps: update to 9.4, test/fix
your applications against that, then make a second jump to something
current.  Each of those jumps could be handled by the respective
version of pg_upgrade.  I concur with Laurenz's advice that stopping
at 9.6 is probably not your best choice for a migration today.

     regards, tom lane



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: upgrade and migrate

2019-12-06 Thread Julie Nishimura
Thank you everybody for your suggestions.
So, to summarize - we can run pg_upgrade from 8.3 to 9.4 (in place), fix app 
related issues (if any), then migrate to a version more recent than 9.6, either 
through pg_basebackup or through logical replication (if we would upgrade to 
version 10).

First step requires downtime. Second does not. Correct?

-Julie


From: Tom Lane 
Sent: Wednesday, December 4, 2019 5:28 AM
To: Stephen Frost 
Cc: Laurenz Albe ; Michael Paquier 
; Julie Nishimura ; 
pgsql-general@lists.postgresql.org ; 
pgsql-general 
Subject: Re: upgrade and migrate

Stephen Frost  writes:
> * Laurenz Albe (laurenz.a...@cybertec.at) wrote:
>> Right, Slony is the way to go, since pg_upgrade doesn't support 8.3.
>> I would upgrade to a version more recent than 9.6.

> So...  there's a bit of history here.  pg_upgrade in 9.4 actually does
> support upgrading from 8.3.X.  Support for upgrading from 8.3 was
> removed in 2209b3923a7afe0b6033ecfea972219df252ca8e.

Yeah.  Also note that 8.3 to 9.6-or-newer is going to be a pretty
huge jump in terms of minor compatibility issues (have you read
all the relevant release notes?).  So there's something to be said
for breaking this down into two steps: update to 9.4, test/fix
your applications against that, then make a second jump to something
current.  Each of those jumps could be handled by the respective
version of pg_upgrade.  I concur with Laurenz's advice that stopping
at 9.6 is probably not your best choice for a migration today.

regards, tom lane


Re: Slow planning...

2019-12-06 Thread Michael Lewis
On Fri, Dec 6, 2019, 3:21 AM Mladen Marinović 
wrote:

> Is there a way to detect why the planing is taking this long?
>
> The database is a 9.6.1 with 32GB of shared_buffers, and 1GB of
> maintanance_work_mem, and machine CPU is below 80% all the time.
>

What is default_statistics_target set to for your cluster or the involved
tables/columns? When raised above the default 100, planning time can
increase exponentially it seems, as the planner examines the stats and
weighs the many options.
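
To check the cluster-wide setting and any per-column overrides that may have been raised (a sketch; the table and column names are placeholders from the example plan):

```sql
-- Cluster-wide setting:
SHOW default_statistics_target;

-- Per-column overrides (attstattarget > 0 means an explicit override):
SELECT attrelid::regclass, attname, attstattarget
FROM pg_attribute
WHERE attstattarget > 0;

-- Reset a per-column target back to the cluster default (-1),
-- then re-gather statistics:
ALTER TABLE table_a ALTER COLUMN object_id SET STATISTICS -1;
ANALYZE table_a;
```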



Re: Insert Table from Execute String Query

2019-12-06 Thread Adrian Klaver

On 12/6/19 1:00 AM, İlyas Derse wrote:
I need to insert into a temp table from an executed query string. How can I do 
that? I'm trying it like this, but it's not working.


CREATE OR REPLACE FUNCTION public.testdyn
(
         x integer
)
RETURNS TABLE
(
         id bigint,
         text character varying(4000)
)
AS $$
         DECLARE mysql TEXT;
BEGIN
         create temp table tmp1
(
id1 bigint,
text character varying(4000)
);
mysql = 'select id, text from TEST';

RETURN QUERY
       EXECUTE mysql INTO tmp1 ;
END;
$$ LANGUAGE plpgsql;


It would help to know what you are trying to do. In the meantime:

1)
create table test (id bigint, text varchar);
insert into test values (1, 'test1'), (2, 'test2');

2) Running your function:

select testdyn(1);
ERROR:  query "SELECT mysql INTO tmp1" is not a SELECT
CONTEXT:  PL/pgSQL function testdyn(integer) line 11 at RETURN QUERY

3) Change to function:

CREATE OR REPLACE FUNCTION public.testdyn(x integer)
 RETURNS TABLE(id bigint, text character varying)
 LANGUAGE plpgsql
AS $function$
DECLARE mysql TEXT;
BEGIN
create temp table tmp1
(
id1 bigint,
text character varying(4000)
);
mysql = 'select id, text from TEST';
drop table tmp1;
RETURN QUERY
  EXECUTE mysql;
END;
$function$

select * from testdyn(1);
 id | text
----+-------
  1 | test1
  2 | test2




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Slow planning...

2019-12-06 Thread Tom Lane
Mladen Marinović  writes:
> After a couple of hours of trying different stuff, set enable_mergejoin  =
> off made the planning time look better: Planning time: 0.322 ms
> Any ideas why this helps?

Hmm, possibly because it's reducing the number of calls to
get_actual_variable_range.  Do you have an indexed column,
with a lot of recently-dead entries at the end of its range,
that is a join key in the slow query?  If so, you might be
hitting the problem that was fixed (not for 9.6) here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3ca930fc3

regards, tom lane




Re: Slow planning...

2019-12-06 Thread Mladen Marinović
After a couple of hours of trying different stuff, set enable_mergejoin  =
off made the planning time look better: Planning time: 0.322 ms
Any ideas why this helps?

Regards,
Mladen Marinović

On Fri, Dec 6, 2019 at 11:14 AM Mladen Marinović 
wrote:

> Hi,
>
> Since this morning, our system has been running slower than usual. It turns out
> that some queries take a very long time to plan (> 1 second). The problem
> occurs when joining bigger tables. There are no partitions on the tables
> involved. The problem correlates in time with the last autovacuum/autoanalyze
> this morning, but manual vacuuming and analyzing did not fix the problem.
>
> An example explain is:
>
> EXPLAIN ANALYSE
> SELECT 1
> FROM table_a a
>   LEFT JOIN table_b b ON b.a_id = a.id
> WHERE a.object_id=13
>   AND a.timestamp<'2019-12-06'
>   AND a.timestamp>'2019-12-03'
>
> Nested Loop Left Join  (cost=1.28..18137.57 rows=6913 width=4) (actual
> time=0.043..90.016 rows=14850 loops=1)
>   ->  Index Scan using uq_object_id_timestamp on table_a a
>  (cost=0.70..7038.49 rows=6913 width=8) (actual time=0.028..21.832
> rows=14850 loops=1)
> Index Cond: ((object_id = 13) AND (timestamp <
> '2019-12-06'::timestamp with time zone) AND (timestamp >
> '2019-12-03'::timestamp with time zone))
>   ->  Index Only Scan using table_b_a_id on table_b b  (cost=0.57..1.60
> rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=14850)
> Index Cond: (a_id = a.id)
> Heap Fetches: 0
> Planning time: 1908.550 ms
> Execution time: 91.004 ms
>
> The same query on a similar parallel system takes 5 ms to plan (PG 9.4).
>
> Is there a way to detect why the planning is taking this long?
>
> The database is 9.6.1 with 32 GB of shared_buffers and 1 GB of
> maintenance_work_mem, and machine CPU stays below 80% all the time.
>
> Regards,
> Mladen Marinović
>


Re: archiving question

2019-12-06 Thread Magnus Hagander
On Fri, Dec 6, 2019 at 12:06 PM Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:

> > On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ)  markus.zwett...@zuerich.ch> wrote:
> >> -Ursprüngliche Nachricht-
> >> Von: Michael Paquier 
> >> Gesendet: Freitag, 6. Dezember 2019 02:43
> >> An: Zwettler Markus (OIZ) 
> >> Cc: Stephen Frost ; mailto:
> pgsql-general@lists.postgresql.org
> >> Betreff: Re: archiving question
> >>
> >> On Thu, Dec 05, 2019 at 03:04:55PM +, Zwettler Markus (OIZ) wrote:
> >> > What do you mean hear?
> >> >
> >> > Afaik, Postgres runs the archive_command per log, means log by log by
> log.
> >> >
> >> > How should we parallelize this?
> >>
> >> You can, in theory, skip the archiving for a couple of segments and
> then do the
> >> operation at once without the need to patch Postgres.
> >> --
> >> Michael
> >
> >
> >Sorry, I am still confused.
> >
> >Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory
> away and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?
> >
> >No, *absolutely* not.
> >
> >What you can do is have archive_command copy things one by one to a local
> directory (still sequentially), and then you can have a separate process
> that sends these to the archive -- and *this* process can be parallelized.
> >
> >//Magnus
>
>
>
> That has been my initial question.
>
> Is there a way to tune this sequential archive_command log by log copy in
> case I have tons of logs within the pg_xlog directory?
>

It will be called one by one; there is no changing that. What you *do* with
that command is up to you, so you can certainly tune that. But as soon as
your command has returned, PostgreSQL has the "right" to remove the
file whenever it thinks it's time. You could, for example, have a daemon that
opens a file handle to the file in response to your archive_command, thereby
preventing it from actually being removed, and then archives the files in
private; in that case the archiving only has to wait for the daemon to
acknowledge that the process has started, not finished.

There's always a risk involved in returning from archive_command before the
file is safely stored on a different machine/storage somewhere. The more
async you make it, the bigger that risk is, but it increases your ability to
parallelize.
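
The two-stage approach described above might look like this in postgresql.conf (the spool path is a placeholder; the second stage is whatever external shipping process you write or deploy):

```
# Stage 1: fast, sequential local copy -- returns quickly so WAL can be recycled.
archive_command = 'test ! -f /var/spool/pg_archive/%f && cp %p /var/spool/pg_archive/%f'

# Stage 2 (outside PostgreSQL): a separate daemon or cron job drains
# /var/spool/pg_archive to the real archive and can be parallelized freely.
```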

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: [11.6] WALs recovery ordering with Restore Command - timelines

2019-12-06 Thread Pavel Suderevsky
One more log example:
>
> 2019-12-06 07:11:16 CST  LOG:  database system was shut down in recovery
> at 2019-12-06 07:11:08 CST
> Searching WAL: 0022.history, location: pg_wal/RECOVERYHISTORY
> 2019-12-06 07:11:16 CST  LOG:  restored log file "0022.history" from
> archive
> Searching WAL: 0023.history, location: pg_wal/RECOVERYHISTORY
> 2019-12-06 07:11:16 CST  LOG:  entering standby mode
> Searching WAL: 0022.history, location: pg_wal/RECOVERYHISTORY
> 2019-12-06 07:11:16 CST  LOG:  restored log file "0022.history" from
> archive
> Searching WAL: 002218C6003F, location: pg_wal/RECOVERYXLOG
> Searching WAL: 002118C6003F, location: pg_wal/RECOVERYXLOG
> 2019-12-06 07:11:20 CST  LOG:  restored log file
> "002118C6003F" from archive
> Searching WAL: 0021.history, location: pg_wal/RECOVERYHISTORY
> 2019-12-06 07:11:20 CST  LOG:  restored log file "0021.history" from
> archive
> Searching WAL: 002218BF001B, location: pg_wal/RECOVERYXLOG
> Searching WAL: 002118BF001B, location: pg_wal/RECOVERYXLOG
> 2019-12-06 07:11:27 CST  LOG:  restored log file
> "002118BF001B" from archive
> 2019-12-06 07:11:27 CST  LOG:  redo starts at 18BF/1B311260
> Searching WAL: 002218BF001C, location: pg_wal/RECOVERYXLOG
> Searching WAL: 002118BF001C, location: pg_wal/RECOVERYXLOG
> 2019-12-06 07:11:34 CST  LOG:  restored log file
> "002118BF001C" from archive
> Searching WAL: 002218BF001D, location: pg_wal/RECOVERYXLOG
> Searching WAL: 002118BF001D, location: pg_wal/RECOVERYXLOG
> 2019-12-06 07:11:40 CST  LOG:  restored log file
> "002118BF001D" from archive
> Searching WAL: 002218BF001E, location: pg_wal/RECOVERYXLOG
> Searching WAL: 002118BF001E, location: pg_wal/RECOVERYXLOG
> 2019-12-06 07:11:46 CST  LOG:  restored log file
> "002118BF001E" from archive
> Searching WAL: 002218BF001F, location: pg_wal/RECOVERYXLOG
> Searching WAL: 002118BF001F, location: pg_wal/RECOVERYXLOG
> 2019-12-06 07:11:53 CST  LOG:  restored log file
> "002118BF001F" from archive
>
Command:
*restore_command = 'echo -e "Searching WAL: %f, location: %p";
/usr/bin/pgbackrest --stanza=platform archive-get %f "%p"'*

As you can see, Postgres tries restoring 0022* WALs before the timeline
switch LSN is reached while still restoring 0021* WALs.

Should I resend this issue to pgsql-bugs?

пн, 25 нояб. 2019 г. в 23:26, Pavel Suderevsky :

> Hi,
>
> PostgreSQL 11.6
> Centos 7.6
> pgBackrest 2.19
>
>
> Case:
> Master is on timeline 15 while Standby that should be synchronized with
> Master as a hot standby is on timeline 13. WALs to be obtained from archive.
>
> recovery.conf:
> >primary_conninfo = 'user=usename host=10.10.10.3 port=5432 sslmode=prefer
> application_name=pg2'
> >recovery_target_timeline = 'latest'
> >restore_command = '/usr/bin/pgbackrest --stanza=platform archive-get %f
> "%p" --log-level-console info'
> >standby_mode = 'on'
>
> Standby recovery is done like that:
> 1. try restore 000F0A2700E5
> 2. unable to find 000F0A2700E5 in the archive
> 3. try restore 000E0A2700E5
> 4. found 000E0A2700E5 in the archive
> -- trying next WAL for BOTH timelines
> 1. try restore 000F0A2700E6
> 2. unable to find 000F0A2700E6 in the archive
> 3. try restore 000E0A2700E6
> 4. found 000E0A2700E6 in the archive
>
> Why does Postgres restore WALs not in a timelines order? First 13, then
> 14, then 15. Up to timeline switch position. WALs it try to restore for the
> latest timeline are deliberately not yet exist. It leads to terrible
> recovery performance because of long "unable to find" operations.
>


AW: archiving question

2019-12-06 Thread Zwettler Markus (OIZ)
> On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ) 
>  wrote:
>> -Ursprüngliche Nachricht-
>> Von: Michael Paquier 
>> Gesendet: Freitag, 6. Dezember 2019 02:43
>> An: Zwettler Markus (OIZ) 
>> Cc: Stephen Frost ; 
>> mailto:pgsql-general@lists.postgresql.org
>> Betreff: Re: archiving question
>> 
>> On Thu, Dec 05, 2019 at 03:04:55PM +, Zwettler Markus (OIZ) wrote:
>> > What do you mean hear?
>> >
>> > Afaik, Postgres runs the archive_command per log, means log by log by log.
>> >
>> > How should we parallelize this?
>> 
>> You can, in theory, skip the archiving for a couple of segments and then do 
>> the
>> operation at once without the need to patch Postgres.
>> --
>> Michael
>
>
>Sorry, I am still confused.
>
>Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory away 
>and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?
>
>No, *absolutely* not.
>
>What you can do is have archive_command copy things one by one to a local 
>directory (still sequentially), and then you can have a separate process that 
>sends these to the archive -- and *this* process can be parallelized. 
>
>//Magnus
 


That has been my initial question.

Is there a way to tune this sequential archive_command log by log copy in case 
I have tons of logs within the pg_xlog directory?

Markus



Slow planning...

2019-12-06 Thread Mladen Marinović
Hi,

Since this morning, our system has been running slower than usual. It turns out
that some queries take a very long time to plan (> 1 second). The problem
occurs when joining bigger tables. There are no partitions on the tables
involved. The problem correlates in time with the last autovacuum/autoanalyze
this morning, but manual vacuuming and analyzing did not fix the problem.

An example explain is:

EXPLAIN ANALYSE
SELECT 1
FROM table_a a
  LEFT JOIN table_b b ON b.a_id = a.id
WHERE a.object_id=13
  AND a.timestamp<'2019-12-06'
  AND a.timestamp>'2019-12-03'

Nested Loop Left Join  (cost=1.28..18137.57 rows=6913 width=4) (actual
time=0.043..90.016 rows=14850 loops=1)
  ->  Index Scan using uq_object_id_timestamp on table_a a
 (cost=0.70..7038.49 rows=6913 width=8) (actual time=0.028..21.832
rows=14850 loops=1)
Index Cond: ((object_id = 13) AND (timestamp <
'2019-12-06'::timestamp with time zone) AND (timestamp >
'2019-12-03'::timestamp with time zone))
  ->  Index Only Scan using table_b_a_id on table_b b  (cost=0.57..1.60
rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=14850)
Index Cond: (a_id = a.id)
Heap Fetches: 0
Planning time: 1908.550 ms
Execution time: 91.004 ms

The same query on a similar parallel system takes 5 ms to plan (PG 9.4).

Is there a way to detect why the planning is taking this long?

The database is 9.6.1 with 32 GB of shared_buffers and 1 GB of
maintenance_work_mem, and machine CPU stays below 80% all the time.

Regards,
Mladen Marinović


Re: archiving question

2019-12-06 Thread Magnus Hagander
On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:

> > -Ursprüngliche Nachricht-
> > Von: Michael Paquier 
> > Gesendet: Freitag, 6. Dezember 2019 02:43
> > An: Zwettler Markus (OIZ) 
> > Cc: Stephen Frost ;
> pgsql-general@lists.postgresql.org
> > Betreff: Re: archiving question
> >
> > On Thu, Dec 05, 2019 at 03:04:55PM +, Zwettler Markus (OIZ) wrote:
> > > What do you mean hear?
> > >
> > > Afaik, Postgres runs the archive_command per log, means log by log by
> log.
> > >
> > > How should we parallelize this?
> >
> > You can, in theory, skip the archiving for a couple of segments and then
> do the
> > operation at once without the need to patch Postgres.
> > --
> > Michael
>
>
> Sorry, I am still confused.
>
> Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory
> away and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?
>
>
No, *absolutely* not.

What you can do is have archive_command copy things one by one to a local
directory (still sequentially), and then you can have a separate process
that sends these to the archive -- and *this* process can be parallelized.

//Magnus


AW: archiving question

2019-12-06 Thread Zwettler Markus (OIZ)
> -Ursprüngliche Nachricht-
> Von: Michael Paquier 
> Gesendet: Freitag, 6. Dezember 2019 02:43
> An: Zwettler Markus (OIZ) 
> Cc: Stephen Frost ; pgsql-general@lists.postgresql.org
> Betreff: Re: archiving question
> 
> On Thu, Dec 05, 2019 at 03:04:55PM +, Zwettler Markus (OIZ) wrote:
> > What do you mean hear?
> >
> > Afaik, Postgres runs the archive_command per log, means log by log by log.
> >
> > How should we parallelize this?
> 
> You can, in theory, skip the archiving for a couple of segments and then do 
> the
> operation at once without the need to patch Postgres.
> --
> Michael


Sorry, I am still confused.

Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory away 
and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?

Markus









Tuple concurrency issue in large objects

2019-12-06 Thread Shalini

Hi all,

I am working on a project which allows multiple users to work on a single 
large text document. I am using lo_put to apply only the diff to the 
large object, without replacing it with a new lob. While working on it, I 
encountered the error "Tuple concurrently updated".

The error can be reproduced with two psql clients.

Setup:

mydb=# create table text_docs(id serial primary key, data oid);
CREATE TABLE
mydb=# insert into text_docs(data) select lo_import('./upload.txt');
INSERT 0 1
mydb=# select * from text_docs;
 id |  data
+-
  1 | 5810130
(1 row)

Now, if we open two psql clients and execute the following commands:

Client 1:

mydb=# begin;
BEGIN
mydb=# select lo_put(5810130, 10, '\xaa');
UPDATE 1

Client 2:

mydb=# select lo_put(5810130, 10, '\xaa');

Client 1:
mydb=# commit;
COMMIT

Client 2:
mydb=# select lo_put(5810130, 10, '\xaa');
ERROR:  tuple concurrently updated

Is there a workaround to this concurrency issue without creating a new 
large object?
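
One possible workaround (an assumption on my part, not a confirmed fix): serialize writers to the same large object with a transaction-scoped advisory lock keyed on its OID, so the underlying catalog row is never updated by two transactions at once:

```sql
BEGIN;
-- Every client takes the same lock before touching lob 5810130;
-- a second writer blocks here until the first commits, instead of
-- failing later with "tuple concurrently updated".
SELECT pg_advisory_xact_lock(5810130);
SELECT lo_put(5810130, 10, '\xaa');
COMMIT;  -- the advisory lock is released automatically at commit
```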


Regards
Shalini






Insert Table from Execute String Query

2019-12-06 Thread İlyas Derse
I need to insert into a temp table from an executed query string. How can I do
that? I'm trying it like this, but it's not working.

CREATE OR REPLACE FUNCTION public.testdyn
(
x integer
)
RETURNS TABLE
(
id bigint,
text character varying(4000)
)
AS $$
DECLARE mysql TEXT;
BEGIN
create temp table tmp1
(
id1 bigint,
text character varying(4000)
);
mysql = 'select id, text from TEST';

RETURN QUERY
  EXECUTE mysql INTO tmp1 ;
END;
$$ LANGUAGE plpgsql;